Title: Scalable Kernel Inverse Optimization
Decision: Accept (poster)
Summary: The paper presents an innovative approach to inverse optimization using kernel methods. Inverse optimization (IO) aims to learn the unknown objective function of an expert decision-maker from past data by determining the optimization goal given the optimal solution, which is the reverse of traditional optimization. The authors extend the hypothesis class of IO objective functions to a reproducing kernel Hilbert space (RKHS), thereby lifting the features to a potentially infinite-dimensional space. They demonstrate a variant of the representer theorem for a specific training loss, reformulating the problem as a finite-dimensional convex optimization. To address the scalability issues of kernel methods, the paper proposes the Sequential Selection Optimization (SSO) algorithm, which selectively optimizes components of the decision variable, improving efficiency and scalability while ensuring convergence to the same solution as the KIO model. The KIO model's generalization capabilities and the SSO algorithm's effectiveness are validated through learning-from-demonstration tasks within the MuJoCo benchmark. Strengths: Inverse optimization is extended to a reproducing kernel Hilbert space (RKHS), which increases the expressiveness of the objective function class; this method can handle more complex decision problems. The Sequential Selection Optimization (SSO) algorithm is proposed to solve the scalability problem of the kernel method; it improves the training efficiency of the model and enables the KIO model to process large-scale datasets. The article not only establishes the effectiveness of the KIO model in theory but also verifies its practical performance through learning-from-demonstration tasks in the MuJoCo benchmark, demonstrating the generalization ability of the model. Weaknesses: The nonlinear and high-dimensional nature of kernel methods makes the models less interpretable. 
A clear understanding and interpretation of the optimization model may be required. The performance of the KIO model depends on the selection of kernel functions and hyperparameters. Different kernel functions and hyperparameter settings can significantly affect the results and often need to be tuned through methods such as cross-validation, which increases the complexity of model training, so I hope the authors can add more details on this part. Technical Quality: 3 Clarity: 2 Questions for Authors: Inverse optimization methods require high-quality demonstration data to learn the target function. If there is noise or error in the demonstration data, what impact will this have on the effectiveness of the method, and how can this be overcome, given that obtaining high-quality data may not be easy? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and questions. Please find our response below. **The nonlinear and high-dimensional nature of kernel methods makes the models less interpretable. A clear understanding and interpretation of the optimization model may be required.** We agree with the reviewer that using kernel methods makes the model less interpretable compared to a simple standard IO model. However, for complex learning tasks, in order to learn a high-performance policy, it is necessary to use richer hypothesis classes like the kernel-based IO model proposed in our paper. This can be seen in Table 1, where IO without kernels achieves low performance compared to KIO and the benchmarks BC(TD3+BC) and BC(CQL), which are neural-network-based policies. **The performance of the KIO model depends on the selection of kernel functions and hyperparameters. Different kernel functions and hyperparameter settings can significantly affect the results and often need to be adjusted through methods such as cross-validation, which increases the complexity of model training. So I hope the author can add more details in this part.** Please refer to the global rebuttal for our extended analysis of different kernel functions. **Inverse optimization methods require high-quality demonstration data to learn the target function. If there is noise or error in the demonstration data, what impact will this have on the effectiveness of the article? How to overcome this defect, because obtaining high-quality data may not be so easy** We agree with the reviewer that the quality of the demonstrations can have an impact on the quality of the learned policy. However, Inverse Optimization methods, in particular the ones based on loss functions, can be quite robust to noisy data (e.g., reference [31]). In fact, the data we used in our experiments is noisy, due to internal noise during the action sampling phase of the expert agents. 
Moreover, we also show results when using data from "medium" quality agents, which further showcases the effectiveness of our KIO approach, even with medium-quality, noisy data. Nonetheless, we conducted an experiment using one of the datasets with two types of noise and reported the final scores. Please see the general rebuttal for the results. - - - We hope that we have addressed all the concerns raised. Please let us know if you have any further comments or questions. --- Rebuttal Comment 1.1: Comment: Thanks for your reply, my concerns are addressed. I will raise my score.
Summary: This paper extends the hypothesis class of Inverse Optimization objective functions to RKHSs, whereby the feature mappings may lie in an infinite-dimensional space. The paper also discusses the scalability issue and proposes the SSO algorithm for the proposed KIO model. Strengths: Clarity: This paper is well written and easy to follow. This paper incorporates the kernel method into the existing IO framework. It also provides the SSO method for scalability, a heuristic for choosing coordinates, and a warm-up trick. This brings some new insights to the optimization community. Weaknesses: The choice of kernels is usually problem-dependent. It would be better to discuss some different types of kernels. Besides, the hyperparameter tuning of the kernels should be reported. Please report the standard deviation in the experiments. Technical Quality: 3 Clarity: 3 Questions for Authors: None. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. Please find our response below. **The choice of kernels is usually problem-dependent. It will be better to discuss some different types of kernels. Besides, the hyperparameter tuning of kernels should be reported.** Please refer to the global rebuttal for our extended analysis of different kernel functions. Beyond the hyperparameters reported in Appendix Section B, we did not conduct an extensive hyperparameter search for kernel functions. **Please report the standard deviation in the experiments.** Please refer to the global rebuttal for the extended results that include the standard deviations. - - - We hope that we have addressed all the concerns raised. Please let us know if you have any further comments or questions. --- Rebuttal Comment 1.1: Comment: Thanks for your reply, my concerns are addressed. I will raise my score.
Summary: This paper concerns the topic of Inverse Optimization (IO), which deals with learning the objective function of a decision maker given past data from an expert decision maker. By kernelizing the objective function, the authors extend the framework to features from a potentially infinite-dimensional space. The main contributions of this paper are a scalable SDP formulation of the kernel inverse optimization problem and a Sequential Selection Optimization algorithm that works by selecting a batch of variables, at random or according to a KKT-based heuristic, to optimize over. The effectiveness of the SSO algorithm is shown experimentally using imitation learning tasks from the MuJoCo benchmark. Strengths: The main strength of this paper is the presentation of a scalable SDP formulation of the kernel optimization problem and a thorough experimental evaluation across a variety of MuJoCo benchmark tasks, which includes an ablation study evaluating different combinations of variable selection and warm-up strategies. Weaknesses: The main weakness is the lack of convergence guarantees for the Sequential Selection Optimization (SSO) algorithm under the different variable selection strategies. However, this is not a major drawback, as the main contribution is a scalable SDP formulation for the kernel inverse optimization problem. The authors used the Gaussian kernel for all experiments, but it would be good to have more discussion about how the type of kernel influences the performance of the SSO algorithm or the quality of the solutions. There is also a lack of detail about how the baseline methods BC(TD3+BC), BC(CQL), and Teacher were evaluated (e.g., hyperparameters, amount of data used, and number of iterations to converge). There is a lack of convergence plots for SSO and the other baselines in Tables 1 and 2. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. 
In Section 4.2, how does one concatenate the optimal solutions of the n solved small problems to form an initial guess? Have you evaluated different possible ways to form an initial guess? 2. Have you considered evaluating your SSO algorithm on different types of kernel functions? 3. For Table 1, you report only the amount of data the KIO+SSO method used, but you didn't report how much data the BC(TD3+BC) and BC(CQL) offline reinforcement learning algorithms used. Is there a reason for this? 4. For Table 1, why are the scores for IO so uniformly low compared to KIO? Is it possible that IO and KIO may perform similarly in simpler imitation learning tasks? 5. For Table 2, it is quite interesting to see that SCS and SSO converged to more or less the same objective value, and it would be interesting to see how many iterations (and time) it took for SCS to converge vs. how many iterations for SSO. This could be better represented in a plot. The convergence could be bad in the beginning due to the low quality of the initial guess and then improve as the algorithm converges. It's fairly obvious that the scores will be in the same range for all trials if the objective value is the same, so it doesn't help to report objective value and score side-by-side. 6. Have you considered reporting the speed of convergence for the KIO + SSO method vs. the BC(TD3+BC) and BC(CQL) offline reinforcement learning algorithms, in terms of a plot of objective value vs. # of iterations or objective value vs. time? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors acknowledge the limitations of their work, which are (1) the lack of convergence guarantees for the Sequential Selection Optimization (SSO) algorithm under the given variable selection strategies and (2) the large memory requirements. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and questions. Please find our response below. **In Section 4.2, how does one concatenate the optimal solutions of the n solved small problems to form an initial guess? Have you evaluated different possible ways to form an initial guess?** In our approach, each subproblem independently solves the optimization problem (Problem 10 in the paper) using non-overlapping subsets of the dataset, $\mathcal{D}_j \subset \mathcal{D}$. This process yields learned parameter values for $\Lambda_i$ and $\Gamma_i$, each associated with a data point in $\mathcal{D}_j$. We then aggregate these parameter values to construct an initialization for the problem encompassing the complete dataset, $\mathcal{D}$. We appreciate your highlighting the significance of initialization, as our experiments confirmed that it critically impacts the performance of the SSO algorithm. In the ablation studies of Section 5.2, we also experimented with zero initialization for the parameters, referred to in Figure 2 as "Heuristic" without WarmUp. Compared to the initialization strategy proposed in Section 4.2 (that is, WarmUp), this method often resulted in slower convergence. Investigating different initialization strategies presents a promising direction for future work. **Have you considered evaluating your SSO algorithm on different types of kernel functions?** Please refer to the global rebuttal for our extended analysis of different kernel functions. **For Table 1, you report only the amount of data the KIO+SSO method used but you didn't report how much data the BC(TD3+BC) and BC(CQL) offline reinforcement learning algorithms used. Is there a reason for this?** We agree with the suggestion of specifying the dataset size used by each algorithm in Table 1. For clarity, we provide the information regarding the dataset sizes in the global rebuttal. **For Table 1, why are the scores for IO so uniformly low compared to KIO? 
Is it possible that IO and KIO may perform similarly in simpler imitation learning tasks?** In Table 1, the scores for IO are low due to the complexity of the MuJoCo tasks. In other words, the hypothesis class used by standard IO techniques is not rich enough to learn an effective policy for such imitation learning tasks. This is due to its reliance on predefined feature spaces, which may not capture the intricacies of more sophisticated environments. In contrast, Kernel Inverse Optimization (KIO) inherently leverages the flexibility of kernel methods to operate in high-dimensional spaces, enabling it to adapt more effectively to the complexity of larger imitation learning problems. As our results demonstrate, KIO consistently outperforms traditional IO in these challenging settings by mitigating the need for manual feature engineering and allowing for more robust learning from data. Indeed, in simpler imitation learning tasks, traditional IO can perform quite competitively, e.g., for the control tasks in reference [2] in the paper. For such tasks, KIO methods may not outperform traditional IO by a large margin, mainly because complex policies are not needed to effectively tackle these simple tasks. **It would be interesting to see how many iterations (and time) it took for SCS to converge vs. how many iterations for SSO to converge.** In our experiments, we employed the Splitting Conic Solver (SCS) as a black-box optimizer to solve Problem 10 using the 5k/10k data points in a single run. In contrast, the Sequential Selection Optimization (SSO) algorithm is an iterative algorithm that runs SCS for the sub-problems defined over $\mathcal{D}_j$, with samples selected based on the KKT-based heuristic outlined in Section 4.1. This is why we only reported the error-iteration curves for SSO across different tasks. 
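The contrast drawn here between a single black-box SCS solve and SSO's iterative block updates can be illustrated with a minimal sketch. It assumes a generic strongly convex quadratic in place of Problem 10, and uses the block-gradient norm as a stand-in for the paper's KKT-based violation score; both substitutions are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Schematic block-coordinate descent on f(x) = 0.5 x^T A x - b^T x.
# At each step, the block with the largest gradient norm (a stand-in for a
# KKT-violation score) is minimized exactly while the rest stays fixed.
rng = np.random.default_rng(0)
n, block = 20, 5
M = rng.standard_normal((n, n))
A = M @ M.T + 100.0 * np.eye(n)       # symmetric positive definite
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)        # exact minimizer, for reference

x = np.zeros(n)
blocks = [np.arange(i, i + block) for i in range(0, n, block)]
for _ in range(200):
    grad = A @ x - b
    idx = blocks[int(np.argmax([np.linalg.norm(grad[j]) for j in blocks]))]
    rest = np.setdiff1d(np.arange(n), idx)
    # exact minimization over the chosen block, remaining coordinates fixed
    x[idx] = np.linalg.solve(A[np.ix_(idx, idx)], b[idx] - A[np.ix_(idx, rest)] @ x[rest])

assert np.linalg.norm(x - x_star) < 1e-8
```

With a well-conditioned objective, a few hundred greedy block solves already match the direct solve to high precision; the same structure underlies running SCS repeatedly over selected sub-problems.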
**It's fairly obvious that the scores will be in the same range for all trials if the objective value is the same, so it doesn't help to report objective value and score side-by-side.** We reported the objective and evaluation scores side-by-side to illustrate that the solution found by SSO not only minimizes the error but also achieves high evaluation scores. This is crucial for assessing performance in imitation learning tasks due to the possibility of compounding errors in dynamical tasks, leading to low evaluation scores even when the training error (i.e., objective value) is low (e.g., see Ross et al., "A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning", 2011). **Have you considered reporting the speed of convergence for the KIO + SSO method vs. the BC(TD3+BC) and BC(CQL) offline reinforcement learning algorithms in terms of a plot of objective value vs. # of iterations or objective value vs. time?** We agree that the speed of convergence is an important factor to consider when comparing SSO with the benchmarks. However, since we used the final results reported in references [17] and [20] for the imitation learning algorithms (BC(TD3+BC) and BC(CQL)), we do not have access to the detailed convergence data needed to generate a plot of objective value versus iterations or time. To provide such a comparison, we would need to re-implement the algorithms and replicate the results from these papers, which is beyond the scope of our current study. - - - We hope that we have addressed all the concerns raised. Please let us know if you have any further comments or questions. --- Rebuttal 2: Comment: I would like to thank the authors for their detailed answers to my comments and questions. Good to see an experimental comparison of different kernel types (linear, RBF, and Gaussian), and thanks for clarifying the dataset size. 
I would suggest that the mentioned experiments regarding different initialization strategies be added to the paper in a table, just to show that it has been considered, and to say that extensive experimentation with initialization strategies is subject of future work. If submitting elsewhere, I would suggest also trying KIO with simpler tasks where IO performs reasonably well for two reasons: (1) it increases the range of experiments, strengthening the paper and showing how performance of KIO varies with complexity of the tasks and (2) it makes it look less suspicious since any reader may ask why scores for IO are uniformly lower. For now, simply mentioning that IO may be competitive in simpler tasks is sufficient. My concerns have been sufficiently addressed and I would like to raise my score to 7 (Accept). --- Rebuttal Comment 2.1: Comment: We thank the reviewer for their valuable comments and suggestions. We will include a table comparing the initialization strategies alongside the table comparing different kernels in the updated version of the manuscript. We also agree that extending our simulations to include simpler tasks in our numerical study would help us demonstrate the performance gap between KIO and IO across varying levels of task complexity. We will add a discussion regarding why IO performs uniformly lower than KIO in the updated version of the manuscript. Based on the reviewer's suggestions, we will make these numerical extensions part of our future work to provide a more comprehensive study of KIO. Just a side note regarding the grade: It appears to us your grade has not yet been updated in the system (it is still 5, whereas we can see the updated grades of the other two reviewers). We are not sure whether there is a system issue or if you plan to change it later. We wish to bring it to your attention just in case.
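The warm-start construction discussed in this thread (solve the problem on non-overlapping subsets $\mathcal{D}_j$, then stack the per-point parameters) can be sketched with kernel ridge regression as a stand-in for Problem 10, since there each data point likewise owns one dual parameter; the data, kernel, and regularizer below are illustrative assumptions.

```python
import numpy as np

# Warm-start sketch: each data point owns one parameter (as Lambda_i and
# Gamma_i do in Problem 10).  Solving the problem restricted to a subset
# D_j yields values only for the parameters of the points in D_j, so
# stacking the subproblem solutions gives a full-length initial guess.
rng = np.random.default_rng(1)
N, d, n_blocks = 90, 4, 3
X = rng.standard_normal((N, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(N)
lam = 0.5
K = X @ X.T                           # linear-kernel Gram matrix

parts = np.array_split(np.arange(N), n_blocks)   # non-overlapping D_j
alpha0 = np.empty(N)
for idx in parts:
    K_j = K[np.ix_(idx, idx)]         # subproblem sees only D_j
    alpha0[idx] = np.linalg.solve(K_j + lam * np.eye(len(idx)), y[idx])

# alpha0 now initializes an iterative solver for the full-dataset problem,
# whose exact solution is:
alpha_star = np.linalg.solve(K + lam * np.eye(N), y)
```

Each subproblem ignores the cross-subset kernel entries, which is exactly why the concatenated guess is only an initialization and not the final solution.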
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and insightful feedback. We have addressed the reviewers' questions by providing additional information and simulation results. In summary:

## Kernel comparison table

Based on feedback from all reviewers, we have extended our numerical results to evaluate our proposed model using different kernels commonly employed in machine learning, namely, the Gaussian (RBF), Laplacian, and linear kernels. These results are shown in the table below.

| Task | RBF | Laplace | Linear |
|--------------------|-------------|------------|--------------|
| hopper-medium | 0.51±0.064 | 0.41±0.051 | 0.024±0.0005 |
| hopper-expert | 1.099±0.004 | 0.71±0.267 | 0.028±0.005 |
| walker2d-medium | 0.72±0.14 | 0.43±0.282 | -0.0019±6e-5 |
| walker2d-expert | 1.091±0.003 | 1.031±0.226 | -0.0002±0.001 |
| halfcheetah-medium | 0.324±0.122 | 0.052±0.105 | -0.008±0.006 |
| halfcheetah-expert | 0.788±0.249 | 0.591±0.355 | 0.020±0.031 |

Please note that the results for the RBF kernel in the table above differ slightly from those in the manuscript. This variance is due to a different partition of the dataset used for normalizing the states.

## Noisy data table

In response to the question from reviewer o6JR, we conducted an additional experiment in the hopper-medium environment. We introduced two types of zero-mean noise to the actions in the dataset to assess the performance of KIO with the RBF kernel. The noise applied has a standard deviation that is a factor of the standard deviation of the actions in the dataset, as shown in the table below.

| Noise std ratio | Gaussian | Uniform |
|--------------------|-------------|------------|
| 0.00 | 0.51±0.064 | 0.51±0.064 |
| 0.05 | 0.499±0.050 | 0.457±0.063 |
| 0.10 | 0.442±0.048 | 0.372±0.026 |

The results indicate that KIO with the RBF kernel is more robust to Gaussian noise than to uniform noise with the same standard deviation. 
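For reference, the three kernel families compared in the table can be written out explicitly; the bandwidth `gamma` below is an arbitrary illustrative value, not a hyperparameter reported in the rebuttal.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF): k(x, y) = exp(-gamma * ||x - y||_2^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def laplacian_kernel(X, Y, gamma=1.0):
    # Laplacian: k(x, y) = exp(-gamma * ||x - y||_1)
    d1 = np.abs(X[:, None, :] - Y[None, :, :]).sum(-1)
    return np.exp(-gamma * d1)

def linear_kernel(X, Y):
    # Linear: k(x, y) = <x, y>
    return X @ Y.T

X = np.random.default_rng(2).standard_normal((5, 3))
for kern in (rbf_kernel, laplacian_kernel):
    G = kern(X, X)
    # valid kernels yield symmetric Gram matrices with unit diagonal
    assert np.allclose(G, G.T) and np.allclose(np.diag(G), 1.0)
```

The linear kernel keeps the model in the original feature space, which is consistent with its much lower scores in the table: it cannot express the nonlinear structure the RBF and Laplacian kernels capture.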
## Dataset size In Table 1, we have used the scores reported by the respective papers (references [17] and [20] in our paper) for each task. These works employ the entire dataset, which consists of 1 million samples per task, as described in the D4RL paper (reference [16] in our paper). - - - More details are given in the individual replies to each reviewer. We hope that we have adequately addressed all concerns raised by the reviewers, and we remain open to further suggestions that may enhance the quality of the manuscript.
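The noise protocol behind the "Noisy data table" above (zero-mean noise whose standard deviation is a fixed ratio of the actions' standard deviation) might look like the following sketch; the action array here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
actions = rng.standard_normal((1000, 3)) * np.array([0.5, 1.0, 2.0])

ratio = 0.05                              # "noise std ratio" from the table
sigma = ratio * actions.std(axis=0)       # per-dimension target noise std

# Gaussian noise with std sigma per action dimension.
noisy_gauss = actions + rng.normal(0.0, sigma, size=actions.shape)

# Uniform noise on [-a, a] has std a / sqrt(3), so a = sigma * sqrt(3)
# matches the Gaussian noise level.
a = sigma * np.sqrt(3.0)
noisy_unif = actions + rng.uniform(-a, a, size=actions.shape)
```

Matching the uniform half-width to `sigma * sqrt(3)` is what makes the two noise columns in the table comparable at the same standard deviation.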
Dataset source: NeurIPS_2024_submissions_huggingface
Conference year: 2024
Title: LG-CAV: Train Any Concept Activation Vector with Language Guidance
Decision: Accept (poster)
Summary: This paper proposes an LG-CAV model that leverages a pretrained vision-language model to train CAVs without labels. This includes a concept ensemble module that employs data augmentation on concept text, a DSR module that optimizes the selection of probe images, and an ASR module that aligns class predictions with concepts. Experiments show that this framework achieves higher CAV quality. Strengths: The methodology is described clearly. The problem is important and the authors have obtained good results. The experiments show the proposed model's effectiveness. Weaknesses: 1. Since LG-CAV leverages a pretrained model, I am wondering whether this framework can handle unseen classes beyond the purely supervised setting, such as novel category detection/generalized category detection. For example, instead of picking 40 classes from ImageNet, could you use 20 for training and the other 20 for testing? 2. From Figure 5 we can see that not all concepts are related to the input sentence. I am curious how the similarity threshold is selected. 3. Evaluation of concept-to-class accuracy would seem to need human evaluation of concepts. How is this done in detail? 4. From Table 4, the improvement of LG-CAV compared with the other baselines is limited. Technical Quality: 3 Clarity: 3 Questions for Authors: see the weaknesses Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for dedicating your time and effort to providing valuable suggestions for this paper! We will rigorously revise the paper based on your review! > **Weakness 1.** Since LG-CAV leverage pretrained model, I am wondering whether this framework can handle unseen class besides just supervised setting such as novel category detection/generalized category detection. For example, instead of pick 40 classes from ImageNet, could you use 20 for training and the other 20 for testing? **Response:** The LG-CAVs trained on one dataset can be transferred to another dataset by utilizing the knowledge from the pre-trained VL model. We verify this in a more challenging setting by transferring the LG-CAVs trained on ImageNet to the Stanford Dogs dataset and the CUB-200-2011 dataset, instead of splitting the 40 classes into 20 training classes and 20 test classes (since the backbones used are pre-trained on ImageNet and have access to all 40 classes). Specifically, we train the LG-CAVs for each class of Stanford Dogs and CUB-200-2011, using ImageNet data as probe images. Next, we use each trained LG-CAV as the weight of a binary classifier, and test its classification accuracy on the test images from the corresponding class and the same number of test images from other classes. Table 1 and Table 2 demonstrate that the average classification accuracy of the LG-CAVs trained using only ImageNet data reaches 80% to 90%, without a large performance gap from the LG-CAVs trained on the original datasets.

Table 1: LG-CAV accuracy on Stanford Dogs with different training data.

|Training Data|ResNet18|DenseNet121|ViT-B|
|:-----|:-----|:-----|:-----|
|ImageNet|84.76|87.68|93.73|
|Stanford Dogs|86.97|94.02|97.05|

Table 2: LG-CAV accuracy on CUB-200-2011 with different training data. 
|Training Data|ResNet18|DenseNet121|ViT-B|
|:-----|:-----|:-----|:-----|
|ImageNet|77.53|80.41|85.53|
|CUB-200-2011|80.86|84.52|87.60|

> **Weakness 2.** From figure 5 we can see that not all concepts are related to the input sentence. I am curious how the similarity threshold is selected. **Response:** Actually, Figure 5 demonstrates the high activation values of a trained LG-CAV on the left images and the low activation values on the right images, which indicates that the trained LG-CAV can discriminate whether the images are related to the target concept. We will add more annotation text to this figure for clarification in the revised paper. > **Weakness 3.** Evaluation of concept-to-class accuracy will need human evaluation of concept. How this is done in detail? **Response:** Actually, the evaluation of concept-to-class accuracy requires no human evaluation. Instead, we follow the previous work CLIP-Dissect [1] and utilize a pre-trained language model to determine the similarity between a concept and a class, and select the concept-class pairs with high similarities as ground truth. Next, for each selected concept-class pair, the concept-to-class accuracy of the corresponding CAV is evaluated according to the similarity between the CAV and the corresponding class in the target model. > **Weakness 4.** From Table 4, the improvement compare with LG-CAV and other baseline is limited. **Response:** Thanks for pointing out this issue. In Table 4, our model correction method freezes the backbone of the model and only trains the final classification layer, verifying that LG-CAV can mitigate the spurious correlation problem with minimal training cost. This paves the way for training LG-CAVs for the middle layers of the backbone, and using them to supervise the training of the backbone for further performance improvement in the future. 
[1] Oikarinen et al., Clip-dissect: Automatic description of neuron representations in deep vision networks, ICLR 2023.
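The binary-classifier evaluation described in the response to Weakness 1 (use a trained LG-CAV as the weight of a binary classifier over image features) can be sketched as follows; the features, the CAV vector, and the separation margin are synthetic stand-ins, not outputs of the actual model.

```python
import numpy as np

# A CAV v scores an image feature x via the activation <x, v>; used as a
# binary classifier weight, the sign of the activation predicts whether
# the image belongs to the CAV's class.
rng = np.random.default_rng(4)
dim = 16
v = rng.standard_normal(dim)
v /= np.linalg.norm(v)                            # unit-norm CAV

pos = rng.standard_normal((200, dim)) + 2.0 * v   # class images: shifted along v
neg = rng.standard_normal((200, dim)) - 2.0 * v   # same number from other classes

feats = np.vstack([pos, neg])
labels = np.concatenate([np.ones(200), np.zeros(200)])
pred = (feats @ v > 0).astype(float)              # threshold the activation at 0
acc = (pred == labels).mean()                     # well above chance when separable
```

In the rebuttal's actual experiment, the accuracy numbers in Tables 1 and 2 come from exactly this kind of balanced positive/negative evaluation, with real LG-CAVs and real test features in place of the synthetic ones above.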
Summary: This paper proposes LG-CAV, a method to train Concept Activation Vectors (CAVs) for any concept without labeled image data, leveraging knowledge from pre-trained vision-language models like CLIP. The authors introduce several techniques to improve CAV quality, including Gaussian alignment, concept ensemble, and deviation sample reweighting. They also propose using the trained LG-CAVs for model correction to improve classification performance. Experiments demonstrate superior CAV quality and model correction results compared to existing methods across multiple datasets and model architectures. Strengths: 1. This paper is well written. The framework is complete, and every component is clearly described in detail. The whole pipeline is easy to follow. 2. The authors propose a novel method with feature alignment loss functions and neural modules to bridge the gap between pre-trained vision-language models and the target classification model. 3. The experimental results show the proposed method achieves the best performance. 4. This paper introduces two new metrics (concept accuracy and concept-to-class accuracy) to evaluate CAV quality. These two metrics are reasonable. Weaknesses: 1. Some of the proposed modules (e.g., Gaussian alignment) seem heuristic. It would be better to give some theoretical justifications. 2. The method uses a set of probe images, but it's not clear how to ensure that the trained LG-CAVs generalize beyond these images. Technical Quality: 3 Clarity: 4 Questions for Authors: No specific questions. This work is of high completeness. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: No specific limitations. This work is of high completeness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for investing your time and effort in offering valuable suggestions for this paper! We will rigorously revise the paper based on your review! > **Weakness 1.** Some of the proposed modules (e.g., Gaussian alignment) seem heuristic. It would be better to give some theoretical justifications. **Response:** Thanks for the advice! The theoretical analyses of our proposed modules are as follows: * Gaussian alignment module. We first prove here that the mathematical expectation of the deviation between two trained LG-CAVs is positively correlated with the deviation between the distributions of activation values used for their training (with the Gaussian distribution as an example), indicating that misaligned activation distributions seriously disrupt the LG-CAV training. Furthermore, we have provided a proof of how the GA module aligns the distribution of activation values from the VL model to the target model in Section A of the Appendix. **Definition:** Suppose ${\rm Act}_1^{\rm gt},{\rm Act}_2^{\rm gt},...,{\rm Act}_M^{\rm gt}$ are the ground-truth activation values of $M$ probe images for the target model, which follow the Gaussian distribution $\mathcal{N}(\mu _ {\rm gt}, \sigma _ {\rm gt}^2)$. $v_c\in\mathbb{R}^{\rm dim}$ is the LG-CAV, ${\rm Act}_i^{v_c}$ is the activation value of $v_c$ on the $i$-th probe image, and the loss function is $\mathcal{L} _ {\rm gt}=\frac{1}{M}\sum _ {i=1}^{M}({\rm Act}_i^{v_c}-{\rm Act}_i^{\rm gt})^2$. Besides, the deviated activation values ${\rm Act}_1^{\rm shift},{\rm Act}_2^{\rm shift},...,{\rm Act}_M^{\rm shift}$ and loss $\mathcal{L} _ {\rm shift}$ are defined likewise. $v_c^{\rm gt}$ and $v_c^{\rm shift}$ are the LG-CAVs trained with these two losses, respectively. $v[k]$ denotes the $k$-th element of a vector $v$. **Theorem:** For each element $k$, the mathematical expectation of $(v_c^{\rm shift}-v_c^{\rm gt})[k]$ is positively correlated with $\mu_{\rm shift}-\mu _ {\rm gt}$. 
*Proof:* $$ \frac{\partial\mathcal{L} _ {\rm gt}}{\partial v_c}=\frac{\partial\frac{1}{M}\sum_{i=1}^{M}({\rm Act}_i^{v_c}-{\rm Act}_i^{\rm gt})^2}{\partial v_c} = \frac{2}{M}\sum _{i=1}^{M}({\rm Act}_i^{v_c}-{\rm Act}_i^{\rm gt})\cdot\frac{\partial{\rm Act}_i^{v_c}}{\partial v_c}. $$ The Gaussian distribution has the following properties: * For each $x \sim \mathcal{N}(\mu, \sigma^2)$ and two constants $a$ and $b$, $a x + b \sim \mathcal{N}(a \mu + b, a^2 \sigma^2)$. * For each $x_1 \sim \mathcal{N}(\mu_1, \sigma_1^2)$ and $x_2 \sim \mathcal{N}(\mu_2, \sigma_2^2)$, $x_1 + x_2 \sim \mathcal{N}(\mu_1 + \mu_2, \sigma_1^2 + \sigma_2^2)$. Next, at each step of gradient descent, $v_c^{\rm gt}$ is updated as $v_c^{\rm gt} = v_c - \gamma \frac{\partial \mathcal{L}_{\rm gt}}{\partial v_c}$, and $v_c^{\rm shift}$ is updated likewise. Therefore, substituting the above formulas into $(v_c^{\rm shift} - v_c^{\rm gt})[k]$, we have: $$ (v_c^{\rm shift} - v_c^{\rm gt})[k] = \frac{2 \gamma}{M} \sum_{i=1}^{M} ({\rm Act}_i^{\rm shift}-{\rm Act}_i^{\rm gt}) \cdot \frac{\partial {\rm Act}_i^{v_c}}{\partial v_c}[k] $$ $$ \sim \mathcal{N}(\frac{2 \gamma}{M} \cdot \sum _ {i=1}^{M} \frac{\partial {\rm Act}_i^{v_c}}{\partial v_c}[k] \cdot (\mu _ {\rm shift}-\mu _ {\rm gt}), \frac{4 \gamma^2}{M^2} \cdot \sum _ {i=1}^{M} (\frac{\partial {\rm Act}_i^{v_c}}{\partial v_c}[k])^2 \cdot (\sigma _ {\rm shift}^2+\sigma _ {\rm gt}^2)) = \mathcal{N}(A \cdot (\mu _{\rm shift}-\mu _{\rm gt}), B \cdot (\sigma _{\rm shift}^2+\sigma _{\rm gt}^2)). $$ Note that $A$ and $B$ are unrelated to the activation distributions; hence $(v_c^{\rm shift}-v_c^{\rm gt})[k]$ follows a Gaussian distribution, and its mathematical expectation is positively correlated with $\mu_{\rm shift}-\mu_{\rm gt}$. (Some steps are abbreviated due to space limits; we will add the full proof to the revised paper.) * Activation sample reweighting module. 
The ASR module allocates higher training weights to the samples with higher activation values on the corresponding LG-CAV. We prove that with this strategy, the class weight in the trained linear classifier has higher similarity with its corresponding LG-CAV, thus mitigating the spurious correlation problem. **Definition:** $\mathcal{I} _ k = \\{ {x_i} \in \mathbb{R}^{\rm dim} \\}_{i=1}^{N}$ denotes the image features of all $N$ training images of class $k$. $u_k \in \mathbb{R}^{\rm dim}$ denotes the class weight for class $k$ in the linear classifier ($K$ classes in total), and $z _ {i, k} = \langle x_i, u_k \rangle$ denotes the inner product. $\omega_i$ is the weight calculated by the ASR module ($\omega_i > 0$); a higher $\omega_i$ indicates that the corresponding LG-CAV is more similar to $x_i$. **Theorem:** The $u_k$ trained with $\mathcal{L} = -\frac{1}{N} \sum_{i=1}^{N} \omega_i \cdot \log \frac{\exp(z_{i, k})}{\sum_{t=1}^{K} \exp(z_{i,t})}$ (weighted cross-entropy loss) is more similar to the corresponding LG-CAV than the $u_k$ trained with $\mathcal{L} = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{\exp(z_{i, k})}{\sum_{t=1}^{K} \exp(z_{i,t})}$ (original cross-entropy loss). We leave out the proof here due to the space limit and will add it to the revised paper. > **Weakness 2.** The method uses a set of probe images, but it's not clear how to ensure that the trained LG-CAVs generalize beyond these images. **Response:** Our experiments demonstrate that probe images with more diverse relations (activation values) to the target concept lead to higher performance of the trained LG-CAVs, because these probe images contain richer knowledge for discriminating the target concept. Therefore, in this work we choose to expand the range of probe images (using ImageNet) to increase their diversity and guarantee the generalization ability.
Moreover, the range of probe images can be easily enlarged, since unlabeled images are readily obtainable from the Internet.
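The Gaussian-alignment theorem above can be sanity-checked with a short simulation. The sketch below assumes a linear activation model ${\rm Act}_i^{v_c} = \langle x_i, v_c \rangle$ (a simplification; the actual activation function in the paper may differ) and verifies the closed form of $(v_c^{\rm shift} - v_c^{\rm gt})[k]$ after one gradient step:

```python
import numpy as np

rng = np.random.default_rng(0)
M, dim = 2000, 8
X = rng.normal(size=(M, dim))      # hypothetical probe-image features
v0 = rng.normal(size=dim)          # shared initial CAV

mu_gt, mu_shift, sigma = 0.0, 0.5, 0.1
act_gt = rng.normal(mu_gt, sigma, size=M)        # ground-truth target activations
act_shift = rng.normal(mu_shift, sigma, size=M)  # mean-shifted target activations

def grad(v, targets):
    # gradient of (1/M) * sum_i (<x_i, v> - target_i)^2 with respect to v
    return (2.0 / M) * X.T @ (X @ v - targets)

gamma = 0.1
v_gt = v0 - gamma * grad(v0, act_gt)        # one gradient step on L_gt
v_shift = v0 - gamma * grad(v0, act_shift)  # one gradient step on L_shift

# Closed form from the proof:
# (v_shift - v_gt)[k] = (2*gamma/M) * sum_i (Act_i^shift - Act_i^gt) * x_i[k]
diff = v_shift - v_gt
closed_form = (2 * gamma / M) * X.T @ (act_shift - act_gt)
```

With the linear model, $\partial {\rm Act}_i^{v_c} / \partial v_c = x_i$, so `diff` matches `closed_form` exactly, and its per-element mean scales with $\mu_{\rm shift} - \mu_{\rm gt}$ as the theorem states.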
Summary: The paper introduces Language-Guided Concept Activation Vectors (LG-CAV), a method to train Concept Activation Vectors (CAVs) without labeled data by leveraging pre-trained vision-language models such as CLIP. LG-CAV uses concept descriptions to guide the training of CAVs by aligning the activation values of concept descriptions on a set of probe images. To improve the quality of LG-CAVs, the authors propose three modules: Gaussian Alignment (GA), Concept Ensemble (CE), and Deviation Sample Reweighting (DSR). The paper also introduces an Activation Sample Reweighting (ASR) technique for model correction, which enhances the performance of the target model. Experiments across various datasets and architectures demonstrate that LG-CAV outperforms existing CAV methods in terms of concept accuracy and concept-to-class accuracy. Strengths: 1. The paper is well-written. 2. The use of vision-language models allows for training CAVs without the need for labeled data. 3. The introduction of GA, CE, and DSR modules improves the quality of LG-CAVs. 4. Beyond generating explanations, the method is applied to model correction, leading to improved performance in target models. 5. The results show substantial improvements in both concept accuracy and concept-to-class accuracy compared to existing methods. Weaknesses: 1. The method proposed in this paper does not clearly address the data scarcity problem of the original CAV methods, which was highlighted at the beginning. Although the method is effective, it is not evident why it successfully addresses the data scarcity issue. 2. The method heavily relies on the availability and performance of pre-trained vision-language models like CLIP, which may not always be accessible or optimal for all tasks. 3. The introduction of multiple enhancement modules increases the complexity and computational requirements of the method, which may be a barrier to practical applications. 4. 
The paper could benefit from a more detailed analysis of scenarios where LG-CAV does not perform well or fails to improve over traditional methods. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Equation 3 uses cosine similarity to calculate the activation values because cosine similarity is invariant to the norms of feature vectors as claimed by the authors. However, this reasoning does not convincingly explain why cosine similarity is necessary in this context. 2. Additionally, in lines 153-155, the authors state that "compared with the original binary classification task for CAV training, the activation values encompass richer information about the extent to which the concepts exist in the images, thus facilitating the training of LG-CAV." I question why richer information can be obtained here. Is there any empirical evidence to support this claim? 3. How does the performance of LG-CAV vary with different types and sizes of pre-trained vision-language models beyond CLIP? 4. What are the computational costs and training times associated with the LG-CAV method compared to traditional CAV methods? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for dedicating your time and effort to provide valuable suggestions for this paper! We will make rigorous revisions to the paper based on the review! > **Weakness 1. Why it successfully addresses the data scarcity issue.** **Response:** Previous methods can only train the CAV for the target concept using the **human-collected** positive & negative images. Our method tackles the data scarcity issue by transferring the abundant concept knowledge from CLIP model to LG-CAV for the target model. Specifically, the LG-CAV is trained by learning the similarity scores of CLIP model on a pool of **unlabeled images** (which can be easily obtained from the Internet). For example, for a concept named ``a cat animal with orange stripes'', CLIP model can calculate the similarity score of this concept on the unlabeled images, by comparing the text features and the image features. The images that are closer to this concept will be assigned a higher similarity score, owing to the excellent cross-modal ability of CLIP model. After learning these similarity scores, the LG-CAV also has the capacity to discern the similarity between an image and this concept, as shown in Figure 5 (A) of the main paper. > **Weakness 2. Relies on CLIP model.** **Response:** CLIP model has demonstrated its excellent cross-modal ability in numerous tasks. By transferring the ability of CLIP model to the target model, our experiments have verified that our method can be applied to universal concepts, e.g., the concepts from the ImageNet dataset and the Broden dataset. Although current CLIP models may have some limitations, the experiments in Section B.5.1 of the Appendix show that our method can be generalized to different types of CLIP models, indicating that it has great potential to be adapted with the next-generation CLIP models in the future. > **Weakness 3. 
Computation cost.** **Response:** Actually, our method only uses an alignment loss function to learn the activation values from the VL model, without adding new parameters. Besides, the coefficients of the added modules (*e.g.*, the sample weights of the ASR module) can be calculated **only once** before training, requiring no redundant calculation at each step of training and saving the training cost. As shown in Table 1, compared with the original CAV, our method increases training time by only 6% to 7% while significantly improving the CAV quality. Table 1: Training time for 468 concepts in the Broden dataset on one A800 GPU. ||ResNet18|DenseNet121|ViT-B| |:-----|:-----|:-----|:-----| |Original CAV|6.89 Hours|8.68 Hours|8.22 Hours| |LG-CAV|7.38 Hours (+7.11%)|9.22 Hours (+6.22%)|8.72 Hours (+6.08%)| > **Weakness 4. Failure cases.** **Response:** Our work may fail on special datasets such as MNIST, as the CLIP model achieves only 88% zero-shot image classification accuracy on MNIST. Nevertheless, our method can be applied to universal images and can be adapted to next-generation CLIP models with better performance. > **Question 1. Why cosine similarity is necessary.** **Response:** Due to the difference in feature dimensions between the target model and the VL model, other metrics (e.g., Euclidean distance) can cause significant variation in the similarity scores between features from the target model and the CLIP model, making them incomparable and resulting in poor performance. Cosine similarity naturally mitigates this problem with a normalization operation that constrains the similarity scores from different models within [-1, 1], making them comparable. Besides, existing CLIP models also use cosine similarity to compute the similarity between text and image features. We follow this metric to maintain the performance of the CLIP model. As shown in Table 2, cosine similarity achieves better performance than the other two metrics on three backbones.
Table 2: Concept accuracy & concept-to-class accuracy in the Broden dataset with different metrics. ||ResNet18|DenseNet121|ViT-B| |:-----|:-----|:-----|:-----| |Cosine Similarity|77.45 & 24.58|79.07 & 23.93|70.52 & 26.12| |Euclidean Distance|56.99 & 4.29|57.42 & 5.24|60.38 & 4.86| |Pearson Correlation|76.25 & 21.22|77.87 & 22.79|69.77 & 24.29| > **Question 2. Why richer information can be obtained here.** **Response:** This method shares a similar theory with Knowledge Distillation (KD). In KD, the student model learns the classification logits (soft label) of an input image from the teacher model instead of the original hard label, because soft label encompasses richer information about the similarity of the image with every class. Likewise, the activation value in our method indicates the similarity of the image with the target concept, with richer information than only assigning a positive or negative label to the image in traditional CAV. To verify this, we conduct an experiment by setting the activation values above a high threshold (0.8) to 1, and other activation values are set to -1. As shown in Table 3, the loss of the similarity information leads to worse performance. Table 3: Concept accuracy & concept-to-class accuracy in the Broden dataset with/without similarity information (hard label). ||ResNet18|DenseNet121|ViT-B| |:-----|:-----|:-----|:-----| |Original|77.45 & 24.58|79.07 & 23.93|70.52 & 26.12| |Hard Label|75.28 & 20.69|76.40 & 17.04|68.10 & 16.66| > **Question 3. Different CLIP models.** **Response:** We have provided the performance of our method with different VL models in Section B.5.1 of the Appendix. Overall, the VL models with higher zero-shot image classification accuracy lead to higher performance on LG-CAV. > **Question 4. Computation cost.** **Response:** The training time compared between our method and the original CAV is shown in Table 1. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. 
After carefully reviewing your clarifications, I believe my initial concern has been addressed. As a result, I will increase my score.
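The role of cosine similarity discussed in Question 1 of the rebuttal above can be illustrated with a minimal sketch. The feature vectors and CAV directions below are random placeholders (not real model outputs); the point is that cosine-similarity activation values always lie in [-1, 1] regardless of feature dimension or norm, so scores from two different models remain comparable:

```python
import numpy as np

def cosine_act(feat, cav):
    # activation value as cosine similarity: invariant to feature norms
    return float(feat @ cav / (np.linalg.norm(feat) * np.linalg.norm(cav)))

rng = np.random.default_rng(1)
f_target = rng.normal(size=512)     # hypothetical target-model feature (512-dim)
cav_target = rng.normal(size=512)   # hypothetical LG-CAV in the target model
f_vl = 10.0 * rng.normal(size=768)  # hypothetical VL-model feature (768-dim, larger norm)
cav_vl = rng.normal(size=768)       # hypothetical text-feature direction in the VL model

act_target = cosine_act(f_target, cav_target)
act_vl = cosine_act(f_vl, cav_vl)
# Both scores lie in [-1, 1] despite different dimensions and scales,
# whereas a raw Euclidean distance would depend on both.
```

Rescaling a feature leaves its cosine activation unchanged, which is exactly the norm-invariance the rebuttal appeals to.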
NeurIPS_2024_submissions_huggingface
2024
EDT: An Efficient Diffusion Transformer Framework Inspired by Human-like Sketching
Accept (poster)
Summary: Transformer-based Diffusion Probabilistic Models (DPMs) have shown great potential in image generation tasks but are often hindered by extensive computational requirements. This paper introduces the Efficient Diffusion Transformer (EDT) framework to address these computational challenges. The EDT framework features a lightweight diffusion model architecture and incorporates a classifier-free Attention Modulation Matrix inspired by human-like sketching. Additionally, the authors propose a token relation-enhanced masking training strategy tailored explicitly for EDT to improve token relation learning. Extensive experiments demonstrate that the EDT framework significantly reduces both training and inference costs while surpassing the performance of existing transformer-based diffusion models in image synthesis. The paper highlights the effectiveness of the EDT framework in achieving a balance between computational efficiency and high performance in image generation tasks. The proposed methods and strategies demonstrate potential for broader applications and future research in the optimization of transformer-based models. Strengths: 1. The paper introduces a novel way of approaching Transformer-based Diffusion Probabilistic Models, especially a classifier-free approach. The paper also attempts to understand the relationship between tokens, thereby making the pipeline efficient. 2. The pipeline is robust and lightweight. The paper introduces the Attention Modulation Matrix, which provides specific quantitative guidance on where and how much to focus in an image. The authors have provided visual results showing how this method improves image quality. Weaknesses: 1. The pre-trained VAE might have been trained on a classification task, which might affect categorization for certain classes. 2. The paper does not mention the number of epochs or an estimate of the time required by EDT to produce these results.
Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Is there any specific reason to use a VAE? There are other lightweight image feature extractor models available that might make the pipeline even more efficient. 2. The experimental validation, while thorough, might benefit from a broader range of datasets to ensure the robustness and generalizability of the EDT framework across different image generation tasks. 3. Please add potential limitations of the work. Confidence: 1 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper is an extremely interesting read. The paper discusses the trade-off between computational resources and performance. However, there is no dedicated section where potential limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive feedback and insightful comments. We will answer your questions in the following: --- ***1. The usage of pre-trained VAE.*** * **In the field of Latent Diffusion Models, pre-trained VAE is a commonly used model.** When training diffusion models or generating images, the pre-trained VAE remains frozen. The VAE encoder is used to encode images into latent representations and the VAE decoder is used to decode latent representations back into images. --- ***2. Training cost of EDT.*** * We reported the training cost and speed **in Table 1 in the main paper**. For ImageNet 256×256, we estimated the training cost for EDT, MDTv2, and DiT on a 48GB-L40 GPU **in Rebuttal-Table 3 in the global rebuttal**, using a batch size of 256 and FP32 precision. **EDT achieves the best performance with a low training cost.** (GPU days refer to the number of days required for training on a single L40 GPU.) | Model|Epochs|Training images|GPU days| FID| |---|---|---|---|---| |**EDT-S** |80|102M|**2.75**|**38.73**| |DiT-S|80|102M|2.96|68.40| |MDTv2-S|80|102M|16.47|39.50| |**EDT-B**|80|102M|9.19|**19.18**| |DiT-B|80|102M|**8.62**|43.47| |MDTv2-B|80|102M|26.17|19.55| |**EDT-XL**|80|102M|**37.79**|**7.52**| |DiT-XL|80|102M|39.82|19.47| |MDTv2-XL|80|102M|72.62|7.70| **Rebuttal-Table 3**: The training cost and FID of EDT, DiT, and MDTv2 on ImageNet 256×256 with batch size of 256 and FP32 precision. --- ***3. Why use VAE.*** * VAE is used to reduce the computational cost of diffusion models. The diffusion model generates images through multiple iterations, whose input in each iteration is its output in the previous iteration. To reduce the computational cost of each iteration, we make the diffusion model train and predict in latent space, since the feature size in latent space is smaller than that in the pixel space of images. 
During inference, the diffusion model is used to generate the latent representations of images, and the VAE decoder is used to decode latent representations back into images. --- ***4. Experiments on extra dataset CelebA-HQ.*** * We conducted a new experiment on CelebA-HQ 256×256 for the unconditional image synthesis task. We trained EDT-S, DiT-S, and MDTv2-S on CelebA-HQ 256×256 for the same number of training epochs with the default settings in their respective papers. **As shown in Rebuttal-Table 2, EDT-S achieved the lowest FID, demonstrating that EDT is also effective in unconditional image synthesis tasks.** |Model|Training images|FID| |---|---|---| |EDT-S|100k * 240|**16.60**| |DiT-S|100k * 240|19.12| |MDTv2-S|100k * 240|17.34| **Rebuttal-Table 2**: Evaluation of unconditional image synthesis on CelebA-HQ. --- ***5. Potential limitations.*** * The experiments were conducted on class-conditional and unconditional image synthesis tasks. Future work will explore other image synthesis tasks such as text-to-image generation. * When integrating AMM into a pre-trained model, the insertion and arrangement of AMM blocks vary across different models. Identifying the optimal placement and configuration of AMM requires testing to fully realize its potential. * Although significant progress has been made with AMM, the generation function of AMM still has room for improvement and warrants further exploration. --- Rebuttal Comment 1.1: Comment: Thank you for the insightful discussion! --- Reply to Comment 1.1.1: Comment: Thank you for your review! Please let me know if you have any questions or concerns.
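The latent-diffusion pipeline described in points 1 and 3 of the rebuttal above can be sketched schematically. The functions below are toy stand-ins, not real networks; the 32×32×4 latent shape assumes the standard factor-8 VAE used by DiT-style models at 256×256 (an assumption, not stated in the rebuttal):

```python
import numpy as np

def vae_encode(img):
    # frozen VAE encoder: (256, 256, 3) image -> (32, 32, 4) latent (toy stand-in)
    return np.zeros((32, 32, 4))

def vae_decode(z):
    # frozen VAE decoder: (32, 32, 4) latent -> (256, 256, 3) image (toy stand-in)
    return np.zeros((256, 256, 3))

def diffusion_step(z, t):
    # one denoising iteration in latent space (placeholder for the transformer)
    return z

z = np.random.default_rng(2).normal(size=(32, 32, 4))  # start from latent noise
for t in reversed(range(50)):                          # iterative denoising
    z = diffusion_step(z, t)
img = vae_decode(z)

# Each iteration operates on 32*32*4 = 4096 values instead of
# 256*256*3 = 196608 pixels: a 48x reduction per step.
```

This is why the diffusion model trains and predicts in latent space: the per-iteration tensor is far smaller than the pixel-space image, and the frozen VAE only runs once at each end of the pipeline.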
Summary: This paper introduces the Efficient Diffusion Transformer (EDT) framework to address the high computational requirements of transformer-based Diffusion Probabilistic Models (DPMs). The EDT framework features a lightweight diffusion model architecture, a training-free Attention Modulation Matrix inspired by human sketching, and a token relation-enhanced masking training strategy. Extensive experiments show that EDT reduces training and inference costs while surpassing existing transformer-based diffusion models in image synthesis performance. For example, EDT-S achieves a lower FID score of 34.27 and offers significantly faster training and inference speeds compared to MDTv2-S. Strengths: + The proposed network's ability to reduce the computation of diffusion is a plus, although it requires more parameters compared to existing methods. + Some experimental results on ImageNet 256×256 look promising. Weaknesses: **[Update]: after rebuttal, I raised score from 4 to 5** The proposed method shows some good merits in the architectural design; however, several major concerns remain: + Most results are reported under suboptimal settings. Specifically, classifier-free guidance is the standard practice, but it is omitted, which fails to show the advantage of the proposed method. This makes the claim that the proposed method is superior to existing SOTA approaches (DiT, MDT, RDM [1], etc.) not very convincing. + Experiments on more datasets (or resolutions, such as 512) need to be conducted to reach a more conclusive finding on the effectiveness of the proposed method; since the proposed method is lightweight, higher-resolution training should not be a big deal. [1] Relay Diffusion: Unifying diffusion process across resolutions for image synthesis, ICLR 2024 Technical Quality: 3 Clarity: 2 Questions for Authors: My main concerns are listed in the weaknesses regarding the evaluation under the CFG setting and on other datasets.
Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Yes, they discussed it. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback and suggestions. We will answer your questions in the following: --- ***1. The experiment under optimal CFG settings.*** * According to DiT and MDTv2, their optimal CFG settings are 1.5 and 3.8, respectively. Based on our experimental exploration, the optimal CFG for EDT is 2. **As shown in Rebuttal-Table 4**, when compared under their optimal CFG settings and at the same training cost, **EDT-S achieves the best performance with the lowest FID**. | Models|Training images|FID| |---|---|---| |EDT-S CFG=2|400k*256|**9.89**| |MDTv2-S CFG=3.8|400k*256|15.62| |DiT-S CFG=1.5|400k*256|21.03| |DiT-XL CFG=1.5|400k*256|5.50| |EDT-XL CFG=2|400k*256|**4.65**| **Rebuttal-Table 4**: Comparison and evaluation under optimal CFG on ImageNet 256×256. --- ***2. Experiments on extra dataset CelebA-HQ.*** * We conducted a new experiment on CelebA-HQ 256×256 for the unconditional image synthesis task. We trained EDT-S, DiT-S, and MDTv2-S on CelebA-HQ 256×256 for the same number of training epochs with the default settings in their respective papers. **As shown in Rebuttal-Table 2, EDT-S achieved the lowest FID, demonstrating that EDT is also effective in unconditional image synthesis tasks.** Due to the limited time, we did not finish the experiments with 512×512 resolution training. We will include it in the revision. |Model|Training images|FID| |---|---|---| |EDT-S|100k * 240|**16.60**| |DiT-S|100k * 240|19.12| |MDTv2-S|100k * 240|17.34| **Rebuttal-Table 2**: Evaluation of unconditional image synthesis on CelebA-HQ. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. It partly resolved my concerns; however, some still remain. First, presenting numerical results alone is not very convincing. In the submission (including the appendix), qualitative results (generated samples) are absent, and it is very important to see the generated samples to support the quantitative numbers.
In addition to the missing comparison with other methods at 512 resolution, the authors heavily referred to and compared with MDTv2, but only focused on the earlier convergence of training steps. In MDTv2, they compared with the baseline DiT at both earlier (e.g., 400k steps) and full convergence (4600k steps), and MDTv2 reached its best FID of 1.58 on ImageNet 256×256. The authors claim the efficiency of the method with much lower computation cost, which raises the question of why they ignored their optimal and final results when their models are trained until fully converged. Can it beat the best FID of MDTv2? A qualitative results comparison to support the numerical results should also be carefully prepared, both for their optimal generated images and for the comparison of reconstructed images using the masking strategy of MDT versus their masking strategy (see the MDTv1 ICCV version for the referred image reconstruction with masking). I expect that the reconstructed output of the proposed method with their masking is better than MDT's. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for your valuable suggestions regarding the comparative experiments and qualitative analysis of the masking training strategy of EDT. These additions make the results more understandable and convincing than relying solely on numerical data. --- ***1. Qualitative results comparison of inpainting images using models trained by the masking training strategy of MDT and EDT.*** Following your advice regarding the qualitative results of masking training and referring to MDTv1, we conducted inpainting experiments using EDT-S under different mask ratios. We utilized two versions of the pre-trained EDT-S models, each employing the masking training strategy of MDT and EDT, respectively. We applied various mask ratios to the images and used these two models to inpaint the masked areas.
**When the mask ratio reached 50%, the EDT-S model using MDT's masking strategy struggled to reconstruct the images, while the EDT-S model using EDT's masking strategy was still able to do so.** We plan to include this visual comparison in the appendix. --- ***2. The experiment of EDT-XL for full convergence.*** Our primary focus has been on the efficiency of diffusion transformers. Our proposed EDT offers the community a more efficient diffusion model architecture, which achieves speed-ups of 2.29x, 2.29x, and 2.22x in inference speed on small, base, and xlarge sizes respectively. Unfortunately, training an EDT-XL for 4600k iterations would require approximately two months for one round of training, and we could not complete this due to resource limitations. Nonetheless, we have validated that EDT maintains acceptable accuracy while significantly improving speed within our capacity. **Training diffusion model is exceptionally costly, and we are currently training an EDT-XL for 2000k iterations, with plans to include the relevant results in the updated version of the paper.** --- ***3. Experiment on different datasets to evaluate the effectiveness of the EDT.*** Unfortunately, because of time and resource limitations, we were unable to conduct experiments at 512 resolution. Instead, we performed experiments on a different dataset (CelebA-HQ), where EDT-S showed competitive results. --- In addition, please note that the masking training strategy in our work is just one part of what we have contributed. It aims to mitigate the performance loss of the efficient diffusion transformer architecture. **Our contributions also encompass the model architecture design and the AMM plugin. These modules are not proposed by the current DiT works, and we believe they can offer new insights for developing this field.** --- Rebuttal 2: Comment: Thank you for your helpful suggestions and for providing a method for submitting qualitative images. 
We have submitted two sets of qualitative images at [https://postimg.cc/gallery/ZS1FXcb](https://postimg.cc/gallery/ZS1FXcb). --- ***Figure 1 is a qualitative analysis of the AMM plugin.*** AMM is a train-free plugin that can be inserted into pre-trained models to improve image generation performance. We compared images generated by EDT-XL with and without AMM to qualitatively analyze the effect of AMM on image generation. In Figure 1, the red boxes highlight the unrealistic areas in the images generated by EDT-XL without AMM. **In the corresponding areas of the images generated by EDT-XL with AMM, the results appear more realistic.** Moreover, the parrot image generated by EDT-XL without AMM is realistic and the parrot image generated by EDT-XL with AMM still remains equally realistic. Therefore, **adding AMM does not negatively affect the original quality.** This visual analysis demonstrates the effectiveness of the AMM plugin. --- ***Figure 2 is a qualitative results comparison of reconstructing masked images using models trained by the masking training strategy of MDT and EDT.*** We conducted inpainting experiments using EDT-S under different masked ratios. We utilized two versions of the pre-trained EDT-S models, each employing the masking training strategy of MDT and EDT, respectively. In Figure 2, We applied various mask ratios to the images and used these two models to reconstruct the masked areas. **When the mask ratio reached 50%, the EDT-S model using MDT's masking strategy struggled to reconstruct the images, while the EDT-S model using EDT's masking strategy was still able to do so. EDT trained by EDT's masking training strategy demonstrates better image reconstruction capabilities.** This visual analysis demonstrates the effectiveness of the EDT's masking training strategy. --- Rebuttal Comment 2.1: Comment: Please note that we have highlighted the difference between the samples, as suggested by reviewer FhNv, and edited the comment above.
Summary: The Efficient Diffusion Transformer (EDT) framework is developed, featuring a lightweight architecture designed based on thorough computational analysis. Inspired by human sketching, EDT alternates between global attention and location attention. Additionally, the Attention Modulation Matrix enhances the detail of generated images in pre-trained diffusion transformers without requiring extra training. A novel token masking training strategy is proposed to improve the token relation learning capability of EDT. EDT achieves a new state-of-the-art performance and faster training and inference speeds compared to existing models like DiT and MDTv2. A series of exploratory experiments and ablation studies were conducted to analyze and identify the key factors affecting EDT's performance. Strengths: Novelty: The three innovative techniques introduced for designing efficient diffusion transformers represent a significant advancement in the field. These techniques, which include the development of a lightweight architecture, the introduction of the Attention Modulation Matrix, and a novel token masking training strategy, are unique contributions that set this work apart from existing approaches. Significance: The challenge of designing efficient diffusion transformers for training and inference is a critical issue in the development of diffusion models. The work’s focus on enhancing efficiency without compromising performance is particularly relevant in the context of growing model sizes and the demand for faster computational methods. Methodology: The proposed algorithm is well formulated and clearly explained. The approach includes a comprehensive computational analysis to design a lightweight diffusion transformer architecture, inspired by human sketching with an alternation process between global attention and location attention. 
The introduction of the Attention Modulation Matrix is particularly noteworthy, as it improves image detail in pre-trained diffusion transformers without additional training costs. The novel token masking training strategy enhances the learning ability of the model, demonstrating a sophisticated understanding of token relationships. Results: The experimental results show improvements over existing methods such as DiT and SD-DiT. A series of exploratory experiments and ablation studies further validate the robustness and effectiveness of the proposed techniques, providing a detailed analysis of the key factors influencing EDT's performance. Weaknesses: 1. The training dataset is limited to ImageNet, which may not fully represent the broader applicability of the method. 2. The Attention Modulation Matrix (AMM) could be tested on text-to-image models, such as the Pixart Series, to demonstrate the wider applicability of the proposed method. 3. The motivation behind the lightweight design is not entirely clear. It seems to balance between UNet and transformer architectures but appears more similar to the UNet. 4. The computational process of the AMM would benefit from additional figures or diagrams to enhance understanding. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why do long skip connection modules inevitably lead to a loss of token information? From my perspective, long skip connections represent a fundamental difference between UNet and transformer architectures, potentially enhancing the training process by facilitating information flow across layers. If these connections are perceived as a drawback, wouldn't it be more effective to adopt the vanilla transformer architecture, which typically avoids such connectivity patterns? 2. What is the additional computational cost incurred by integrating the Attention Modulation Matrix into pre-trained DiT models? 
Understanding the impact on computational resources is crucial for assessing the feasibility and scalability of implementing this enhancement across different model configurations and training scenarios. 3. Can you provide a detailed comparison of the training costs associated with your proposed method versus other relevant approaches? Analyzing the computational overhead, training time, and resource requirements relative to existing methods will provide valuable insights into the practical advantages and trade-offs of adopting your approach for training and deploying diffusion transformers. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The evaluation and application of the proposed method is limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful comments and suggestions. We will address your concerns in the following answers: --- ***1. Experiments on extra dataset CelebA-HQ.*** * We conducted a new experiment on CelebA-HQ 256×256 for the unconditional image synthesis task. We trained EDT-S, DiT-S, and MDTv2-S on CelebA-HQ 256×256 under the same training epochs with the default settings in their papers, respectively. **As shown in Rebuttal-Table 2, EDT-S achieved the lowest FID, demonstrating that EDT is also effective in unconditional image synthesis tasks.** |Model|Training images|FID| |---|---|---| |EDT-S|100k * 240|**16.60**| |DiT-S|100k * 240|19.12| |MDTv2-S|100k * 240|17.34| **Rebuttal-Table 2**: Evaluation of unconditional image synthesis on CelebA-HQ. --- ***2. Motivation for the lightweight design.*** * **Our motivation for a lightweight design is to decrease the computational overhead by reducing the number of tokens in the intermediate blocks.** We achieve this by using down-sampling modules to compress the tokens. This idea is similar to U-Net, but we applied it to transformer-based diffusion models. **However, while compressing tokens reduces the computational overhead of EDT, it also compromises token information. To address this issue, we have enhanced the down-sampling modules and long skip connection modules,** incorporating techniques such as ''token information enhancement'' and ''positional encoding supplement''. **As shown in Table 4 of the Appendix,** the effectiveness of these improvements is demonstrated through ablation experiments. --- ***3. The computational process of the AMM.*** * **In Rebuttal-Figure 2 in the PDF of the global rebuttal**, we illustrate the process of attention modulation. The image is split into N patches, and each token is the feature of one patch. Each token (patch) corresponds to a rectangular area of the image and has a corresponding 2-D coordinate (x, y) in the image. 
We calculate a Euclidean distance value $d$ for each pair of tokens, resulting in a distance matrix, which is an N×N tensor. Based on the distance matrix, we generate modulation values $m$ via the modulation matrix generation function $F(d)$, which assigns lower modulation values to tokens that are farther apart. These modulation values form an Attention Modulation Matrix (AMM), another N×N tensor. **Importantly, we integrate the AMM into the pre-trained EDT without any additional training.** The attention modulation matrix is calculated when the model is instantiated. During inference, the modulated attention score matrix is obtained by performing a Hadamard product between the attention modulation matrix and the attention score matrix. --- ***4. The computational cost of AMM.*** * **The addition of AMM introduces minimal computational cost.** Firstly, AMM can be incorporated into a pre-trained model without requiring additional fine-tuning, resulting in no additional training costs. Secondly, the increased computational cost of AMM during inference is negligible. For instance, in the last block of EDT-XL, the attention score matrix and the Attention Modulation Matrix are both 18×256×256 tensors. The computational cost of the Hadamard product between the attention score matrix and the AMM is only 1.18M FLOPs for multiplication calculations, out of a total of 2819.3M FLOPs for the block. **This amounts to merely 0.04% of the total FLOPs, making the computational cost of AMM negligible.** In our experiments, we added 5 AMM modules to a pre-trained DiT-XL model, and the FID score decreased from 18 to 14. --- ***5. The loss of token information in long skip connections.*** * Long skip connection modules are crucial for the performance of diffusion models, but the token merging operations within these modules can result in a loss of token information. Before the long skip connection, we have two sets of tokens, each represented as an N × D tensor. 
During the long skip connection, these two sets of tokens are concatenated into an N × 2D tensor and then merged into an N × D tensor through a linear layer. **This dimensionality reduction from 2D to D leads to a loss of token information and disrupts positional information.** To address this issue, we introduced Token Information Enhancement and Positional Encoding Supplement within the long skip connections, as demonstrated **in Table 4 of the Appendix**. --- ***6. Training cost of EDT.*** * We reported the training cost and speed **in Table 1 in the main paper**. For ImageNet 256×256, we estimated the training cost for EDT, MDTv2, and DiT on a 48GB-L40 GPU **in Rebuttal-Table 3 in the global rebuttal**, using a batch size of 256 and FP32 precision. **EDT achieves the best performance with a low training cost.** (GPU days refer to the number of days required for training on a single L40 GPU.) | Model|Epochs|Training images|GPU days| FID| |---|---|---|---|---| |**EDT-S** |80|102M|**2.75**|**38.73**| |DiT-S|80|102M|2.96|68.40| |MDTv2-S|80|102M|16.47|39.50| |**EDT-B**|80|102M|9.19|**19.18**| |DiT-B|80|102M|**8.62**|43.47| |MDTv2-B|80|102M|26.17|19.55| |**EDT-XL**|80|102M|**37.79**|**7.52**| |DiT-XL|80|102M|39.82|19.47| |MDTv2-XL|80|102M|72.62|7.70| **Rebuttal-Table 3**: The training cost and FID of EDT, DiT, and MDTv2 on ImageNet 256×256 with batch size of 256 and FP32 precision. --- ***7. Adding AMM to Text-to-Image Models.*** * Due to time constraints, incorporating AMM into text-to-image models will be addressed in our future work. --- Rebuttal 2: Title: Additional qualitative results for both AMM and masking training strategy Comment: As reviewers FhNv and 8pvq suggested, we added qualitative results to visualize our results. --- We highlight the different areas by the red box on the samples. Please refer to the new Link at [https://postimg.cc/gallery/ZS1FXcb](https://postimg.cc/gallery/ZS1FXcb). 
--- ***1. The difference between the provided samples generated by the models with/without AMM.*** In Figure 1, the red boxes highlight the unrealistic areas in the images generated by EDT-XL without AMM. **In the corresponding areas of the images generated by EDT-XL with AMM, the results appear more realistic.** Moreover, where the image generated by EDT-XL without AMM is already realistic (the parrot), the image generated with AMM remains equally realistic. Therefore, **adding AMM does not negatively affect the original quality.** This visual analysis demonstrates the effectiveness of the AMM plugin. --- ***2. The difference between the provided samples generated by the models with the proposed masking training strategy and MDT's masking strategy.*** In Figure 2, we applied various mask ratios to the images and used these two models to reconstruct the masked areas during inference. **When the mask ratio reached 50%, the EDT-S model using MDT's masking strategy struggled to reconstruct the images, while the EDT-S model using EDT's masking strategy was still able to do so.** The model trained with EDT's masking strategy demonstrated better image reconstruction capabilities. This visual analysis demonstrates the effectiveness of EDT's masking training strategy. --- Thank you for your valuable and insightful suggestions. We look forward to hearing your feedback on EDT. If you have any questions, we are more than willing to provide further clarification and address any issues promptly.
Summary: The paper introduces a new efficient diffusion-based model, namely EDT. First, they revisit the masking strategy proposed in MDT and provide some insights regarding the discrepancy in the training objective. To this end, EDT uses a more efficient masking mechanism that focuses on the main generation task instead of paying more attention to recovering the masked regions. Secondly, the main design is inspired by the human brain, which is nice, where they implement a mechanism to alternate between local and global attention. Strengths: * Alternating between the local and global details, inspired by the human brain, is novel. * Tackling an important application. * The paper's writing is good, which makes the paper easy to follow in general. Weaknesses: * [Methodology] The paper tackles an important application. However, I am concerned about the execution. Designing architectures inspired by the human brain is very important; however, the gain seems limited. * [Experiments] It is mentioned in lines 43, 62, and 193 that the AMM could be easily integrated into any existing method. However, there are no experiments to support this claim. This is crucial to show the effectiveness of the proposed module. * [Experiments] Conducting the experiments on only one dataset is insufficient. It is recommended to show the gain of the proposed method on the CelebA-HQ and LSUN-Churches datasets. I suggest these two as their scale is considerably small; thus, conducting these experiments should not be challenging. * [Results] The claim in the abstract that your method achieves a 4x speed-up in training is misleading. Despite your clarification, this is only valid for the small variant; I was expecting significant improvements on the other variants as well, which is not the case. * [Analysis] The analysis regarding the MDTv2 masking mechanism is shallow. In addition, presenting the loss trends using screenshots of the terminal is shocking. 
* [Ablations] Ablation studies are very important, so deferring them to the appendix could be problematic. I recommend saving some space and including the ablations in the main paper. * [Visualization] All the figures must be at the top of the pages to enhance readability. * [Related Work] The related work section is too short and insufficient. At the least, I was expecting a longer version in the appendix. In addition, some important related work is missing. I would suggest the authors include Toddler [1], as its motivation, designing an efficient diffusion-based generative model inspired by the human brain, is very similar. [1] Bakr, Eslam Mohamed, et al. "ToddlerDiffusion: Flash Interpretable Controllable Diffusion Model." arXiv preprint arXiv:2311.14542 (2023). Technical Quality: 3 Clarity: 2 Questions for Authors: * Please show the effectiveness of your method on more datasets, as suggested earlier. * Integrate the AMM module into different methods to show its effectiveness as claimed. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: * The limitations are not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the professional and insightful comments. We address each comment as follows: --- ***1. Performance improvement by AMM.*** - We demonstrate that AMM is both effective and efficient through several experiments detailed **in Tables 9, 10, and 11 in the Appendix**. Tables 9 and 10 compare model performance with and without AMM across different sizes of EDTs. **Models incorporating AMM achieve lower FID scores and higher IS scores**. For example, Table 9 shows that EDT-S with AMM achieves a lower FID score of 34.2 compared to 46.9 for EDT-S without AMM. **Notably, AMM is integrated into pre-trained EDT-S without additional training**. Table 11 illustrates AMM's effectiveness across different iterations of EDT. Furthermore, AMM and its arrangement in blocks are designed by mimicking the brain's logical process: alternating between global and local attention as observed in human sketching. We will include a detailed ablation study of AMM in the main paper. --- ***2. More models incorporated with AMM.*** - We integrated AMM into pre-trained EDT, DiT, and MDT, which are transformer-based models. We conducted experiments on ImageNet and CelebA-HQ **in Rebuttal-Table 1**. **The models with AMM obtain lower FID compared to the models without AMM**, which further demonstrates the effectiveness of AMM in different models and datasets. 
|**On ImageNet 256×256 (class-conditional image synthesis)**|||| |---|---|---|---| |Model|Training images|W/o AMM|W AMM| |EDT-S|400k * 256|42.60|**34.20**| |DiT-S|400k * 256|67.16|**63.11**| |MDTv2-S|400k * 256|39.02|**31.89**| |EDT-XL|400k * 256|12.80|**7.52**| |DiT-XL|400k * 256|18.48|**14.73**| |DiT-XL|7000k * 256|9.62|**3.75**| |**On CelebA-HQ 256 × 256 (unconditional image synthesis)**|||| |Model|Training images|W/o AMM|W AMM| |EDT-S|100k * 240|17.01|**16.60**| |DiT-S|100k * 240|19.12|**18.41**| |MDTv2-S|100k * 240|17.34|**17.11**| **Rebuttal-Table 1**: Evaluation of the performance of pre-trained EDT, DiT, and MDT without and with AMM on ImageNet and CelebA-HQ. --- ***3. Experiments on extra dataset CelebA-HQ.*** - We conducted a new experiment on CelebA-HQ 256×256 for the unconditional image synthesis task. We trained EDT-S, DiT-S, and MDTv2-S on CelebA-HQ 256×256 under the same training epochs with the default settings in their papers, respectively. **As shown in Rebuttal-Table 2, EDT-S achieved the lowest FID, demonstrating that EDT is also effective in unconditional image synthesis tasks.** |Model|Training images|FID| |---|---|---| |EDT-S|100k * 240|**16.60**| |DiT-S|100k * 240|19.12| |MDTv2-S|100k * 240|17.34| **Rebuttal-Table 2**: Evaluation of unconditional image synthesis on CelebA-HQ. --- ***4. Clarification of the speed-ups in training.*** - As shown **in Table 1 of the main paper**, EDT-S, EDT-B, and EDT-XL attain speed-ups of 3.93x, 2.84x, and 1.92x respectively in the training phase, and 2.29x, 2.29x, and 2.22x respectively in inference, when compared to the corresponding sizes of MDTv2. We will provide a more explicit description regarding the speed-up details. --- ***5. 
Analysis of masking training mechanism.*** - By observing the loss changes **in Rebuttal-Figure 1 in the PDF of the global rebuttal**, we identified a conflict between $L_{masked}$ (the loss when the input consists of the remaining tokens after masking) and $L_{full}$ (the loss when the input consists of the full token input) in MDTv2. Although MDTv2 also observed similar issues, it did not solve them effectively. In MDTv2, it was found that using only $L_{masked}$ caused the model to overly focus on reconstructing the masked tokens, thereby neglecting diffusion training. To address this, both the full token input and the masked token input were fed to the diffusion model, resulting in the inclusion of both $L_{masked}$ and $L_{full}$ in MDTv2's loss function. However, our work discovered a conflict between $L_{masked}$ and $L_{full}$. We separately applied the masking training strategies of MDTv2 and EDT to train diffusion models and extracted the $L_{masked}$ and $L_{full}$ values at the 300k~305k training iterations. **As shown in Rebuttal-Figure 1 in the PDF of the global rebuttal**, we visualized the changes of $L_{masked}$ and $L_{full}$. **The top-left of Rebuttal-Figure 1** depicts the loss changes when using MDTv2's masking training strategy. As $L_{full}$ decreases, $L_{masked}$ increases, and vice versa, illustrating the conflict between these two losses. This conflict arises because $L_{masked}$ causes the model to focus on masked token reconstruction while ignoring diffusion training. **As shown in the bottom-left of Rebuttal-Figure 1, both $L_{full}$ and $L_{masked}$ hardly converged during the 300k to 400k training iterations.** In our work, to resolve this conflict, we performed masking in an intermediate layer instead of before the input. This method forces the model to learn relations among tokens before they are masked. **The right side of Rebuttal-Figure 1** shows the loss changes when using EDT's masking training strategy. 
With EDT's strategy, $L_{masked}$ and $L_{full}$ exhibit synchronized changes, and **the loss values continuously decrease during the 300k to 400k training iterations.** Due to the page limit, the aforementioned discussion will be included in the supplementary material for the camera-ready version. --- ***6. Ablations.*** - We will rearrange the paper and place the ablation tables in the main paper. --- ***7. Visualization.*** - We will place all figures at the top of their pages to enhance readability. --- ***8. Related work.*** - We will improve our related work section, provide a more comprehensive version in the appendix, and discuss important work such as Toddler. --- Rebuttal Comment 1.1: Title: Thanks to Authors Comment: I appreciate the authors' rebuttal and efforts. After carefully reading the other reviewers' rebuttals and comments, I am more inclined to raise my score to 5 instead of 4, as I still have some concerns: - The qualitative results in response to the reviewer "8pvq" do not show the superiority of the proposed methods. I see no difference between the provided samples. - The quantitative results show some gains but are still not entirely convincing. For instance, the reported FID on CelebA-HQ is too high. Diffusion-based models, even small ones, can easily get around 7 FID on CelebA-HQ. - I also agree with other reviewers about the importance of showing the effectiveness of the proposed method at higher resolution, e.g., 512. However, I also understand the hardware challenges; thus, I will not put any weight on this. --- Rebuttal 2: Comment: Thank you for your professional and insightful comments. These comments enhance the quality of our work. We appreciate your willingness to consider raising the score. Below are our responses to the follow-up questions. --- We highlight the differing areas with red boxes on the samples. 
Please refer to the new link at [https://postimg.cc/gallery/ZS1FXcb](https://postimg.cc/gallery/ZS1FXcb). --- ***1. The difference between the provided samples generated by the models with/without AMM.*** In Figure 1, the red boxes highlight the unrealistic areas in the images generated by EDT-XL without AMM. **In the corresponding areas of the images generated by EDT-XL with AMM, the results appear more realistic.** Moreover, where the image generated by EDT-XL without AMM is already realistic (the parrot), the image generated with AMM remains equally realistic. Therefore, **adding AMM does not negatively affect the original quality.** This visual analysis demonstrates the effectiveness of the AMM plugin. --- ***2. The difference between the provided samples generated by the models with the proposed masking training strategy and MDT's masking strategy.*** In Figure 2, we applied various mask ratios to the images and used these two models to reconstruct the masked areas during inference. **When the mask ratio reached 50%, the EDT-S model using MDT's masking strategy struggled to reconstruct the images, while the EDT-S model using EDT's masking strategy was still able to do so. The model trained with EDT's masking strategy demonstrates better image reconstruction capabilities.** This visual analysis demonstrates the effectiveness of EDT's masking training strategy. --- ***3. The reported FID on CelebA-HQ.*** Due to limited time and resources, we trained both the baselines and our proposed EDT for 100k steps for a fair comparison, using the standard training settings. We agree that the FID is not optimal, but the results still show better performance than the baseline methods. We will run a longer (400k-step) training experiment for a future revision. --- ***4. High-resolution results.*** Indeed, high-resolution results make the work more convincing. 
Currently, we are running the experiment on ImageNet 512×512, and we will provide the results in the revision.
Rebuttal 1: Rebuttal: We thank all the reviewers for their constructive comments and insightful suggestions. We have carefully added experiments and figures in response to the comments of all the reviewers. --- We are encouraged that the reviewers pointed out our work *"is novel"*, *"tackling an important application"* (**R FhNv**); *"well formulated and clearly explained"*, *"unique contributions"*, and *"is noteworthy"* (**R QU5R**); *"reduce the computation of diffusion"* and *"experimental results show promising looks"* (**R 8pvq**); *"a novel way"*, *"pipeline is robust and lightweight"*, and *"provides specific quantitative results"* (**R MD5e**). We address the reviews below and will incorporate all changes in the revision. --- ***Summary and Responses to the Reviewers' Main Concerns:*** --- ***1. The performance improvements by AMM.*** - We integrated AMM into pre-trained EDT, DiT, and MDT, which are transformer-based models. We conducted experiments on ImageNet and CelebA-HQ **in Rebuttal-Table 1**. **The models with AMM obtain lower FID compared to the models without AMM**, which further demonstrates the effectiveness of AMM in different models and datasets. |**On ImageNet 256×256 (class-conditional image synthesis)**|||| |---|---|---|---| |Model|Training images|W/o AMM|W AMM| |EDT-S|400k * 256|42.60|**34.20**| |DiT-S|400k * 256|67.16|**63.11**| |MDTv2-S|400k * 256|39.02|**31.89**| |EDT-XL|400k * 256|12.80|**7.52**| |DiT-XL|400k * 256|18.48|**14.73**| |DiT-XL|7000k * 256|9.62|**3.75**| |**On CelebA-HQ 256 × 256 (unconditional image synthesis)**|||| |Model|Training images|W/o AMM|W AMM| |EDT-S|100k * 240|17.01|**16.60**| |DiT-S|100k * 240|19.12|**18.41**| |MDTv2-S|100k * 240|17.34|**17.11**| **Rebuttal-Table 1**: Evaluation of the performance of pre-trained EDT, DiT, and MDT without and with AMM on ImageNet and CelebA-HQ. --- ***2. Experiments on extra dataset CelebA-HQ 256×256.*** - CelebA-HQ 256×256 is used for the unconditional image synthesis task. 
We conducted a new experiment, training EDT-S, DiT-S, and MDTv2-S on CelebA-HQ 256×256, **as shown in Rebuttal-Table 2 in the global rebuttal. EDT-S achieved the lowest FID, demonstrating that EDT is also effective in unconditional image synthesis tasks.** |Model|Training images|FID| |---|---|---| |EDT-S|100k * 240|**16.60**| |DiT-S|100k * 240|19.12| |MDTv2-S|100k * 240|17.34| **Rebuttal-Table 2**: Evaluation of unconditional image synthesis on CelebA-HQ. --- ***3. The training cost of EDT.*** - For ImageNet 256×256, we estimated the training cost for EDT, MDTv2, and DiT on a 48GB-L40 GPU **in Rebuttal-Table 3 in the global rebuttal**, using a batch size of 256 and FP32 precision. **EDT achieves the best performance with a low training cost.** (GPU days refer to the number of days required for training on a single L40 GPU.) | Model|Epochs|Training images|GPU days| FID| |---|---|---|---|---| |**EDT-S** |80|102M|**2.75**|**38.73**| |DiT-S|80|102M|2.96|68.40| |MDTv2-S|80|102M|16.47|39.50| |**EDT-B**|80|102M|9.19|**19.18**| |DiT-B|80|102M|**8.62**|43.47| |MDTv2-B|80|102M|26.17|19.55| |**EDT-XL**|80|102M|**37.79**|**7.52**| |DiT-XL|80|102M|39.82|19.47| |MDTv2-XL|80|102M|72.62|7.70| **Rebuttal-Table 3**: The training cost and FID of EDT, DiT, and MDTv2 on ImageNet 256×256 with batch size of 256 and FP32 precision. --- ***4. Analysis of masking training mechanism*** - By observing the loss changes **in Rebuttal-Figure 1 in the PDF of the global rebuttal**, we identified a conflict between $L_{masked}$ (loss when the input consists of the remaining tokens after masking) and $L_{full}$ (loss when the input consists of the full token input) in MDTv2. We separately applied the masking training strategies of MDTv2 and EDT to train diffusion models and extracted $L_{masked}$ and $L_{full}$ values at the 300k~305k training iterations. **As shown in Rebuttal-Figure 1 in the PDF of the global rebuttal**, we visualized the changes of $L_{masked}$ and $L_{full}$. 
**The top-left of Rebuttal-Figure 1** depicts the loss changes when using MDTv2's masking training strategy. As $L_{full}$ decreases, $L_{masked}$ increases, and vice versa, illustrating the conflict between these two losses. This conflict arises because $L_{masked}$ causes the model to focus on masked token reconstruction while ignoring diffusion training. **As shown in the bottom-left of Rebuttal-Figure 1, both $L_{full}$ and $L_{masked}$ hardly converged during the 300k to 400k training iterations.** **The right side of Rebuttal-Figure 1** shows the loss changes when using EDT's masking training strategy. With EDT's strategy, $L_{masked}$ and $L_{full}$ exhibit synchronized changes, and **the loss values continuously decrease during the 300k to 400k training iterations.** --- ***5. The computational process of the AMM in a block.*** - **In Rebuttal-Figure 2 in the PDF of the global rebuttal**, we illustrate the process of attention modulation. The image is split into N patches, and each token is the feature of one patch. Each token (patch) corresponds to a rectangular area of the image and has a corresponding 2-D coordinate (x, y) in the image. We calculate a Euclidean distance value $d$ for each pair of tokens, resulting in a distance matrix, which is an N×N tensor. Based on the distance matrix, we generate modulation values $m$ via the modulation matrix generation function $F(d)$, which assigns lower modulation values to tokens that are farther apart. These modulation values form an Attention Modulation Matrix (AMM), another N×N tensor. **Importantly, we integrate the AMM into the pre-trained EDT without any additional training.** The attention modulation matrix is calculated when the model is instantiated. During inference, the modulated attention score matrix is obtained by performing a Hadamard product between the attention modulation matrix and the attention score matrix. Pdf: /pdf/74f8c8d94c18fc2ca87d36fc9315fa8e0ad0c798.pdf
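For concreteness, the attention-modulation computation described in point 5 can be sketched as follows. This is a minimal NumPy sketch: the decay function `F(d) = exp(-d / sigma)` and the value of `sigma` are illustrative assumptions, since the rebuttal only specifies that `F(d)` assigns lower modulation values to farther token pairs.

```python
import numpy as np

def attention_modulation_matrix(h, w, sigma=4.0):
    """Build an N x N Attention Modulation Matrix (N = h * w tokens).

    Each token has a 2-D patch coordinate (x, y); pairwise Euclidean
    distances d are mapped through F(d) so that farther token pairs
    receive lower modulation values. F(d) = exp(-d / sigma) here is an
    assumed illustrative choice, not the paper's exact function.
    """
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)  # (N, 2)
    diff = coords[:, None, :] - coords[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))   # pairwise distance matrix, (N, N)
    return np.exp(-d / sigma)               # monotonically decreasing in d

# Computed once when the model is instantiated; no training involved.
amm = attention_modulation_matrix(4, 4)     # 4x4 patch grid -> 16 tokens

# At inference: Hadamard product with the attention score matrix, per head.
scores = np.random.rand(2, 16, 16)          # (heads, N, N) attention scores
out = scores * amm                          # AMM broadcasts over the head axis
```

The key property this sketch illustrates is that the matrix depends only on patch coordinates, so it can be built once at instantiation and plugged into a pre-trained model, adding nothing beyond the element-wise product at inference time.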
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
On the Worst Prompt Performance of Large Language Models
Accept (poster)
Summary: The authors propose a new benchmark to study the robustness of LLMs to prompt variations. Different from previous work, this paper mainly focuses on semantically equivalent prompts rather than task-level instructions. The experiments demonstrate that many popular LLMs are sensitive to the form of prompts. Moreover, it is difficult to detect the worst prompt and improve the worst prompt performance. Strengths: 1. Research Value: The paper tackles a highly relevant problem that has substantial implications for the practical deployment of LLMs. The issue of prompt sensitivity is critical and the paper provides valuable insights into this domain. 2. Clarity of Motivation: The authors have effectively articulated the motivation behind their work. The introduction and the problem statement are clear, setting the stage for the reader to understand the importance of the study. Weaknesses: 1. Lack of Solution: While the paper adeptly identifies the problem of prompt variability and its impact on LLM performance, it falls short of proposing an effective solution to mitigate this issue. The research would benefit from the inclusion of potential remedies or strategies to enhance prompt robustness. 2. Inconsistency in Description: There appears to be a discrepancy between the textual description and the visual representation (Figure 1) regarding the focus on task-level instructions versus case-level inputs. Clarification on this point is needed to avoid any confusion for the readers. 3. Limited Scope of Experimentation: The experiments, while extensive, do not include testing on more advanced models such as GPT4. The performance of the proposed benchmark on state-of-the-art models would provide a more comprehensive understanding of the robustness of the latest LLMs. 4. Language Specificity: The study seems to be limited to English-language prompts. 
Given the global application of LLMs, it would be beneficial to extend the experiments to include other languages to assess the generalizability of the findings. 5. Prompt Diversity Validation: The paper does not explore whether incorporating a diverse set of prompts in the SFT stage could potentially improve the model's worst prompt performance. An analysis of this nature would add depth to the paper's contribution. Technical Quality: 3 Clarity: 3 Questions for Authors: See Weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive feedback. We appreciate that you acknowledge the thorough evaluations and insightful guidelines in our paper. We provide point-by-point responses to address your concerns as follows: **[W1]: potential remedies or strategies to enhance prompt robustness.** **[A1]**: Thank you for your insightful questions. We explored various methods to identify the worst prompt and improve the model's robustness. Unfortunately, the investigated methods are not effective enough to mitigate the problem. We would like to clarify that identifying this problem is itself a valuable contribution, as it has been neglected by previous work. As a consequence, previous solutions fail to address the problem. The contribution of our work lies, first and foremost, in pioneering the shift from task-level instructions to case-level queries and capitalizing on the concept of worst prompt performance. Our benchmark provides a general test for researchers to evaluate a model's ability to provide stable responses for real-world users before deploying LLMs. Our comprehensive experiments highlight the great challenge of identifying the worst prompt and improving its performance in this realistic setting. Our results thoroughly examine existing efforts in reducing model sensitivity to prompt variations and clearly delineate their limitations. We anticipate that other attempts, such as prompt engineering techniques, model retraining with paraphrases, and integrating external knowledge sources, can potentially address this problem, and we leave them for future work. **[W2]: There appears to be a discrepancy between the textual description and the visual representation (Figure 1) regarding the focus on task-level instructions versus case-level inputs. Clarification on this point is needed to avoid any confusion for the readers.** **[A2]**: Could you please specify where you perceive a discrepancy in the statement? 
Currently, we use task-level instructions and case-level inputs to denote the two parts into which previous studies decompose a complete case-level query. In our setting, we study the variations of case-level queries and do not distinguish between instructions and inputs. **[W3]: The experiments, while extensive, do not include testing on more advanced models such as GPT4. The performance of the proposed benchmark on state-of-the-art models would provide a more comprehensive understanding of the robustness of the latest LLMs.** **[A3]**: We selected models based on their popularity and advanced nature. These models represent the state-of-the-art open-source LLMs and are widely adopted in existing research. Due to our limited budget, we did not test the performance of GPT4. **[W4]: The study seems to be limited to English language prompts. Given the global application of LLMs, it would be beneficial to extend the experiments to include other languages to assess the generalizability of the findings.** **[A4]**: Thank you for your suggestion. We anticipate that other languages would face similar, possibly more severe, issues, given that current LLMs are mostly trained on English data. We will investigate whether other languages face the same issues in future work. **[W5]: The paper does not explore whether incorporating a diverse set of prompts in the SFT stage could potentially improve a model's worst prompt performance. An analysis of this nature would add depth to the paper's contribution.** **[A5]**: In fact, current state-of-the-art LLMs (those included in our study) have been extensively fine-tuned using a huge number of diverse prompts. Nevertheless, our experiment results show that they are still not robust enough on our benchmark. Additionally, the swarm distillation technique we used in our experiments [Line 298-310] is related to the SFT you mentioned. 
It works by aligning the outputs generated by the model for different paraphrases to improve the consistency of the model's results. Our results indicate that the benefits of this method are not significant. --- Rebuttal 2: Comment: Thank you again for your positive feedback! We are grateful for your recognition of the value of the problem we study and the insights of our findings! We hope our clarifications addressed your comments and we would like to inquire if you have any further questions or require additional information. Would you kindly be open to increasing your score? We are eager to provide any necessary support and continue the dialogue. --- Rebuttal Comment 2.1: Comment: The authors' response has addressed most of my concerns. I will raise my score to 6. --- Reply to Comment 2.1.1: Comment: Thank you for your valuable feedback and the consideration of raising the score after our discussion. We are pleased that our rebuttal has addressed your concerns. We will also address these concerns and incorporate all your suggestions in our paper. Thanks again for your insightful comments that help improve our work greatly!
Summary: This paper introduces a new benchmark, ROBUSTALPACAEVAL, that contains semantically equivalent queries of diverse real-world tasks, and uses it to conduct extensive experiments on ChatGPT and six open-source LLMs. It highlights variability in model performance and difficulties in predicting the worst prompt. Strengths: - This work introduces a new benchmark focused on diverse real-world user queries rather than task-level instructions, which more closely mirrors real-world scenarios. - It conducts comprehensive evaluations across several LLMs and showcases variability in model performance across different prompts. - The authors tried several methods to improve the worst prompt and show the limitations of current methods. Weaknesses: - The methodology utilizes gpt4_turbo as the evaluator and the reference model for outputs, which might introduce biases. A discussion of this potential bias should be included. - The ROBUSTALPACAEVAL benchmark contains 10 paraphrases per given query. It’s unclear how performance disparities between the worst and best prompts would change with an increased number of paraphrases. This might also affect the robustness of cases with significant performance gaps. - While the study explores several LLM families including Llama, Gemma, and Mistral, it includes only one version of ChatGPT and does not specify which version. Whether the prompts are uniform across different ChatGPT versions is unknown. Technical Quality: 3 Clarity: 3 Questions for Authors: - How is prompt perplexity calculated? - Are there any common features among the worst-performing prompts within the same model family? The paper points out the difficulty in predicting the worst prompts, both model-independently and with model access. Given that the overlap rate of model-agnostic worst-4 prompts reaches about 40% in the Gemma family, an analysis comparing the features of the worst and best prompts could be insightful.
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The paper includes a limited selection of LLMs, and the challenge of effectively improving the worst-performing prompts remains unresolved. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and for highlighting the strengths of our work. We appreciate your constructive feedback and requests for further clarification. Below, we address each of your points in detail. **[W1]: The methodology utilizes gpt4_turbo as the evaluator and the reference model for outputs, which might introduce biases. A discussion of this potential bias should be included.** **[A1]**: Thank you for your insightful questions. The evaluation metric we use is introduced by the AlpacaEval 2.0 benchmark, which has been proven to achieve high consistency with human evaluations and has been widely adopted by the LLM research community. Additionally, following common practice, all models are first compared against gpt4_turbo, and we then compare their win rates. Therefore, if the assumed bias is present, all models are affected, ensuring a fair comparison. We will include the above discussion in our revisions. **[W2]: The ROBUSTALPACAEVAL benchmark contains 10 paraphrases per given query. It’s unclear how performance disparities between the worst and best prompts would change with an increased number of paraphrases. This might also affect the robustness of cases with significant performance gaps.** **[A2]**: Indeed, the worst/best performance decreases/increases monotonically with the number of paraphrases (n). While it is impractical to list all possible paraphrases and calculate exact values, we find that these scores converge quickly with a sufficiently large number of paraphrases. Taking Llama-2-70B as an example, we calculated the worst/best performance and their difference from n=1 to 11 as below (we report the averaged score across all possible combinations and the standard deviations). For other models, we observe the same trend of changes. To balance the evaluation efficiency and robustness, we decided to construct 10 paraphrases for each query.
| Metric | n=1 | n=2 | n=3 | n=4 | n=5 | n=6 | n=7 | n=8 | n=9 | n=10 | n=11 |
| :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| Worst | 29.18(1.42) | 21.01(1.25) | 17.48(1.26) | 15.4(1.14) | 13.95(1.02) | 12.85(0.92) | 11.95(0.83) | 11.19(0.73) | 10.53(0.59) | 9.93(0.42) | 9.38(0.0) |
| Best | 29.18(1.42) | 37.34(1.82) | 41.97(1.77) | 45.17(1.64) | 47.55(1.47) | 49.42(1.28) | 50.94(1.08) | 52.18(0.88) | 53.23(0.67) | 54.11(0.43) | 54.86(0.0) |
| Best - Worst | 0.0(0.0) | 16.33(2.48) | 24.49(2.54) | 29.77(2.32) | 33.6(2.06) | 36.57(1.79) | 38.98(1.53) | 40.99(1.25) | 42.7(0.96) | 44.18(0.63) | 45.48(0.0) |

**[W3]: While the study explores several LLM families including Llama, Gemma, and Mistral, it includes only one version of ChatGPT and does not specify which version. Whether the prompts are uniform across different ChatGPT versions is unknown.** **[A3]**: The version of ChatGPT we utilize is gpt-3.5-turbo-1106. **[Q1]: How is prompt perplexity calculated?** **[A4]**: We average the log probabilities of all the tokens in the prompt and take the inverse of the exponential of this average as the perplexity of the prompt, i.e., perplexity = exp(-average log probability). **[Q2]: Are there any common features among the worst-performing prompts within the same model family? The paper points out the difficulty in predicting the worst prompts, both model-independently and with model access. Given that the overlap rate of model-agnostic worst-4 prompts reaches about 40% in the Gemma family, an analysis comparing the features of the worst and best prompts could be insightful.** **[A5]**: We conducted extensive manual analysis and found no discernible difference between the worst- and best-performing prompts that humans can perceive.
We observe a high overlap rate of worst prompts among models within the same family, which we hypothesize stems from their shared preferences shaped by factors like the distribution of training data (which is not fully accessible to us), indicating a need for a deeper understanding and knowledge of the LLMs. --- Rebuttal 2: Comment: Thank you again for your positive feedback! We are grateful for your recognition of the value of the problem we study and the insights of our findings! We hope our clarifications addressed your comments and we would like to inquire if you have any further questions or require additional information. Would you kindly be open to increasing your score? We are eager to provide any necessary support and continue the dialogue. --- Rebuttal Comment 2.1: Comment: Thanks to the authors for the detailed response. The rebuttal has addressed most of my concerns. I will keep a positive score. Good luck! --- Reply to Comment 2.1.1: Comment: Thank you for your valuable feedback and for firmly backing the acceptance of our paper. We are glad our rebuttal has addressed your concerns. We promise to incorporate all your suggestions and tackle any remaining issues in our updated paper. Thank you again for your insightful advice and continued support.
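As an illustrative sketch of the perplexity computation described in [A4] above (the token log-probabilities below are hypothetical placeholder values, not taken from any model in the paper):

```python
import math

def prompt_perplexity(token_logprobs):
    """Perplexity as described in [A4]: average the per-token log
    probabilities, then take the inverse of the exponential of that
    average, i.e. exp(-mean(log p))."""
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(-avg_logprob)

# Hypothetical natural-log token probabilities for a 4-token prompt.
# Their average is -1.0, so the perplexity is approximately e (about 2.718).
ppl = prompt_perplexity([-1.2, -0.5, -2.0, -0.3])
```

A lower average log-probability (a less "expected" prompt) yields a higher perplexity, which is the quantity compared against worst-prompt behavior in the paper.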
Summary: In this paper, the author(s) propose a benchmark for prompt performance of large language models. In particular, the author(s) leverage GPT4 to generate variants of prompts and hence construct a dataset. Afterwards, the author(s) evaluate the performance of these generated prompts, and explore the identification and improvement of worst cases. Strengths: - This study acknowledges the variation and diversity of prompts in real-world use, and adds value to prompt engineering for LLMs. - The proposed benchmark is applied to multiple LLMs. A thorough comparative analysis is conducted. - The paper is well-organized, and the language is technical yet understandable for readers with domain knowledge. Weaknesses: - The methodology and result analysis of this study can be elaborated. - Result visualization can be improved for better readability. Technical Quality: 3 Clarity: 4 Questions for Authors: - Section 2: A related work can be added and analysed. Schulhoff, S., Ilie, M., Balepur, N., Kahadze, K., Liu, A., Si, C., ... & Resnik, P. (2024). The Prompt Report: A Systematic Survey of Prompting Techniques. arXiv preprint arXiv:2406.06608. - Please elaborate on the motivation and result analysis for exploring the hidden states in Section 4.2. - I wonder if the author(s) consider sharing the dataset used in this project? - The author(s) can consider specifying the selection criteria of LLMs used in experiments. - Please consider using bold in tables to highlight the noteworthy data. - It will be interesting to summarize the features/patterns of worst prompts for different LLMs. - Please adjust the placement of tables and figures to improve readability. For instance, Figure 3 is on Page 6 while it is described on Page 7. Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: - The author(s) note that the research can be extended by including more LLMs, and by applying alternative approaches to identify worst prompts.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable insights on our work. We appreciate your thoughtful feedback, and we would like to address your concerns as follows: **[W1]: The methodology and result analysis of this study can be elaborated.** **[A1]**: We appreciate the reviewer's feedback on the need for a more detailed explanation of our methodology and result analysis. We will expand on the rationale behind our method and provide a more thorough interpretation of the results when more space is available. **[W2]: Result visualization can be improved for better readability.** **[A2]**: We will work on enhancing the clarity and effectiveness of our visual presentations, ensuring that key findings are prominently displayed. **[Q1]: Section 2: A related work can be added and analysed. Schulhoff, S., Ilie, M., Balepur, N., Kahadze, K., Liu, A., Si, C., ... & Resnik, P. (2024). The Prompt Report: A Systematic Survey of Prompting Techniques. arXiv preprint arXiv:2406.06608.** **[A3]**: Thank you for your suggestion. This survey has a nice summary of existing prompting techniques, and we are happy to include this work in our next version. **[Q2]: Please elaborate on the motivation and result analysis for exploring the hidden states in Section 4.2.** **[A4]**: As demonstrated by many previous works, hidden states can reflect a model's perception and preferences. Our motivation for studying hidden states is to determine whether there is a correlation between the model's own encoding of the prompt and its performance. We conducted two experiments [Line 243-259]. The PCA results showed no clear boundary between the representations of prompts with different performance levels; in the other experiment, we trained a classifier to predict which prompt the model performs better on based on the hidden states, and found that the trained classifier only achieved about 50% accuracy on the test set.
The results of both experiments indicate that it is difficult to predict the model's performance based on the model's representation of the prompt. **[Q3]: I wonder if the author(s) consider sharing the dataset used in this project?** **[A5]**: We have released the dataset used in this project. We will ensure that the dataset is properly documented. **[Q4]: The author(s) can consider specifying the selection criteria of LLMs used in experiments.** **[A6]**: We selected models based on their popularity and advanced nature. These models represent the state-of-the-art open-source LLMs and are widely adopted in existing research. **[Q5]: Please consider using bold in tables to highlight the noteworthy data.** **[A7]**: We will implement this to make the most significant findings and differences between conditions more apparent at a glance. **[Q6]: It will be interesting to summarize the features/patterns of worst prompts for different LLMs.** **[A8]**: We conducted extensive manual analysis and found no discernible difference between the worst- and best-performing prompts that humans can perceive. We observe a high overlap rate of worst prompts among models within the same family, which we hypothesize stems from their shared preferences shaped by factors like the distribution of training data (which is not fully accessible to us), indicating a need for a deeper understanding and knowledge of the LLMs. **[Q7]: Please adjust the place of tables and figures to improve readability. For instance, Figure 3 is on Page 6 while it is described on Page 7.** **[A9]**: We will review and adjust the positioning of these elements to ensure they appear as close as possible to their first reference in the text. --- Rebuttal 2: Comment: Thank you again for your positive feedback! We are grateful for your recognition of the value of the problem we study and the insights of our findings! 
We hope our clarifications addressed your comments and we would like to inquire if you have any further questions or require additional information. Would you kindly be open to increasing your score? We are eager to provide any necessary support and continue the dialogue.
Summary: This paper proposes a new benchmark, RobustAlpacaEval, a benchmark with semantically equivalent queries. The authors evaluate performance as the worst performance across all the prompts and show that many prompt consistency methods have a limited improvement on this benchmark. The paper also shows that it is difficult to predict which prompts will have a poor performance. Strengths: - The paper has a variety of key findings (Section 3.2). First, the authors find that there is a large gap between worst and best performance. Second, the authors find that scaling does not help robustness. Third, the authors find that there is little agreement between various models on which prompts are harder and/or easier. - The experiments are thorough and well conducted. - The findings open up doors for future research and efforts to make models more robust. Weaknesses: - The variance of a model to different prompts has been studied several times in the literature (for example, see [1]), and it is unclear to me how a benchmark measuring worst-case performance would be used. I would appreciate if the authors provided more clarity on how they envision that this work could shed insight and facilitate future efforts. - In Section 5, only Llama models are considered, which makes it unclear whether the insights discovered also apply to other models or are unique to Llama. - The paper would be much stronger if the authors proposed a method to mitigate the phenomenon demonstrated in the paper, or if the authors gave a detailed explanation of why the identified phenomenon occurs. [1] https://arxiv.org/abs/2401.03729 Technical Quality: 3 Clarity: 3 Questions for Authors: - Can the authors suggest some potential directions for resolving the issue raised in the paper? - When creating a benchmark like this, the number of prompts can be increased arbitrarily, and the accuracy could be extremely low if the right prompts are chosen. 
Similarly, one could create an adversarial setting to test the robustness of each model. Why did you design the benchmark in this way? - How do the authors envision that this benchmark be used? For example, should all future models be forced to evaluate on RobustAlpacaEval? Because there are many more prompts for each sample, this would lead to a multiplicative factor on the cost for inference. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your in-depth review and the recognition of the novelty, importance, and empirical results of our work. We understand your concerns and address them with the following clarifications. **[W1]: relation to prior work & future impact of our work.** **[A1]**: Thank you for your insightful questions. Our work significantly differs from most previous research in two aspects. First, we focus on the model's robustness when faced with prompts that have the same meaning, unlike methods that introduce intentional perturbations, potentially affecting the semantics (e.g., [1] focuses on a few special prompt perturbations, including changes in output format, jailbreaks, and tipping strategies, which can alter the semantics of the original prompts). Second, we investigate the impact of paraphrasing case-level user queries, which is more closely aligned with real-world needs than past studies that focused solely on task-level instructions. We will add the discussion about [1] in our revisions. The significance of our work lies, first and foremost, in pioneering the shift from task-level instructions to case-level queries and capitalizing on the concept of worst prompt performance. Our benchmark provides a general test for evaluating a model's ability to provide stable responses for real-world users before deploying LLMs. Our experiments highlight the great challenge of identifying the worst prompt and improving its performance in this realistic setting. Our results thoroughly examine existing efforts in reducing model sensitivity to prompt variations and clearly delineate their limitations. **[W2]: results for other models.** **[A2]**: Thanks for your constructive suggestion. We would like to emphasize that we have already presented the results using LLMs of different scales in Table 4. For other model families, we observed similar phenomena and omitted the discussion to save space.
For example, the results on Gemma-1.1-2b-it are given in the following table. We can see that the general conclusions (Lines 311 to 325) regarding the effect of different methods (self-refinement, voting, and distillation) are consistent with those of the Llama family.

| Method | Orig. Perf.↑ | Worst Perf.↑ | Best Perf.↑ | Avg. Perf.↑ | Standard Dev.↓ |
| :----: | :----: | :----: | :----: | :----: | :----: |
| Raw | 16.32 | 4.42 | 36.6 | 15.27 | 11.78 |
| Self-Refinement | 6.75(-9.57) | 0.03(-4.39) | 18.44(-18.16) | 4.55(-10.72) | 6.06(-5.72) |
| Voting | 14.30(-2.02) | 14.30(+9.88) | 14.30(-22.30) | 14.30(-0.97) | - |
| Distillation | 13.08(-3.24) | 1.67(-2.75) | 31.36(-5.24) | 11.28(-3.99) | 9.93(-1.85) |

**[W3]: Mitigation or explanation of the phenomenon.** **[A3]**: We explored various methods to identify the worst prompt and improve the model's robustness. Unfortunately, none of these methods is effective enough to mitigate the problem. In addition to the automatic analysis presented in the paper, we conducted extensive manual analysis of the worst prompts and found no discernible difference between the worst- and best-performing prompts that humans can perceive. We believe that understanding and addressing this problem requires a deeper understanding and knowledge of the LLMs. For instance, we observe a high overlap rate of worst prompts among models within the same family, which we hypothesize stems from their shared preferences shaped by factors like the distribution of training data (which is not fully accessible to us). **[Q1]: Potential directions for resolving the issue.** **[A4]**: Our research reveals that existing efforts for reducing model sensitivity are not as effective in our setup. We are actively exploring strategies such as prompt engineering techniques, model retraining with paraphrases, and integrating external knowledge sources to alleviate this issue.
**[Q2]: the number of prompts & designing the benchmark.** **[A5]**: The worst/best performance decreases/increases monotonically with the number of paraphrases (n). While it is impractical to list all possible paraphrases and calculate exact values, we find that these scores converge quickly with a sufficiently large number of paraphrases. Taking Llama-2-70B as an example, we calculated the worst and best performance from n=1 to 11 as below (we report the averaged score across all possible combinations and the standard deviations). For other models, we observe the same trend of changes. To balance the evaluation efficiency and robustness, we decided to construct 10 paraphrases for each query.

| Metric | n=1 | n=2 | n=3 | n=4 | n=5 | n=6 | n=7 | n=8 | n=9 | n=10 | n=11 |
| :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| Worst | 29.18(1.42) | 21.01(1.25) | 17.48(1.26) | 15.4(1.14) | 13.95(1.02) | 12.85(0.92) | 11.95(0.83) | 11.19(0.73) | 10.53(0.59) | 9.93(0.42) | 9.38(0.0) |
| Best | 29.18(1.42) | 37.34(1.82) | 41.97(1.77) | 45.17(1.64) | 47.55(1.47) | 49.42(1.28) | 50.94(1.08) | 52.18(0.88) | 53.23(0.67) | 54.11(0.43) | 54.86(0.0) |

It is noteworthy that the construction of our benchmark is not tailored to any specific model but provides a general testing method to evaluate the robustness of models when faced with semantically equivalent but differently phrased instructions. As the experimental results show, different models exhibit similar fluctuations in their performance, while the worst prompts for each model are unique. **[Q3]: The usefulness of the benchmark.** **[A6]**: We envision ROBUSTALPACAEVAL as a tool in the broader toolkit for evaluating LLMs. Models can be evaluated on this benchmark to ensure they meet a minimum standard of robustness. While the increased number of prompts raises computational costs, this is balanced by the critical insight it provides into model reliability.
--- Rebuttal 2: Comment: Thank you again for your positive feedback! We are grateful for your recognition of the richness of our experiments and the contribution of our research to the community! We hope our clarifications addressed your comments and we would like to inquire if you have any further questions or require additional information. Would you kindly be open to increasing your score? We are eager to provide any necessary support and continue the dialogue. --- Rebuttal Comment 2.1: Comment: Thanks to the authors for their response. The provided method certainly does measure some sort of robustness metric. In my humble opinion, this level of robustness is one step above just using one prompt, but is still far away from measuring robustness against real and adversarial attacks. In any case, I am still supporting the acceptance of the paper, and I will keep my score, which is above the acceptance threshold. --- Rebuttal 3: Comment: Thank you for your prompt reply. We fully agree with your view that "it is still far away from measuring robustness against real and adversarial attacks," and we would like to share some of our thoughts with you. Firstly, while robustness against attacks is a critical security issue, our focus is more on model stability, particularly its response to prompts with varying but equally clear expressions of the same meaning. We observe that most users do not deliberately attack models, but their distinct language styles can unconsciously influence the quality of the model's outputs. We believe that model stability is an area that deserves more attention, and the "worst prompt performance of models" we have proposed is an often-overlooked issue. Our work presents a dedicated and comprehensive analysis of measuring, predicting, and improving worst prompt performance. We hope our paper will inspire further research in this area. Lastly, we appreciate your explicit statement of **“supporting the acceptance of the paper”**. 
However, a borderline accept score typically indicates uncertainty and a significant chance of rejection, in contrast to a weak accept or higher score, which signals a clear intent to publish. We kindly request that you consider adjusting the score upwards. Thanks again for your insightful comments and encouraging reply.
null
NeurIPS_2024_submissions_huggingface
2,024
Summary: This paper studies the worst performance an LLM can have on input queries by testing on paraphrases of each query and reporting the original, worst, best, and average performance. It has been found that there is a large gap between the best and worst performance. It is then found that there is no particular "worst prompt" among all models or even models from the same family. Furthermore, model-dependent features are also not useful in identifying the worst prompt. Finally, the authors explored a few approaches to try to improve the "worst performance". In particular, voting seems effective in improving the lower bound at the cost of harming the upper bound. Strengths: - This paper discusses an important question to study, that is, the lower-bound performance of LLMs on different prompts. It is interesting to know that identifying the prompt that leads to bad performance is difficult by looking at the prompts alone or using an LLM itself. Weaknesses: - I think the definition of the "worst" or "best" performance is not clearly stated (line 121). If I understood correctly, the worst performance will monotonically drop if we have more paraphrases of each query. This is linked to the question below and I hope that the authors can clarify this. - Using paraphrases to study prompt variation could introduce errors and paraphraser/LLM preference mismatch. It might be questioned that Table 1 and Table 4's average performance is almost always lower than the original performance. Does this mean that the paraphrasing process has introduced errors, domain shifts, etc? Technical Quality: 3 Clarity: 3 Questions for Authors: - I wonder if you could provide a clear and reproducible definition of the worst and best performance in the mentioned line 121. From the writing, I think regarding whether a query is correctly responded to by the model: worst=all(prompt1, prompt2, ...) and best=any(prompt1, prompt2, ...). Is this correct?
- If this is the case, I think the more paraphrases we have, the higher we get for the best performance and vice versa for the worst performance. This might make the number of paraphrases a bit arbitrary. The worst-best gap might also be inflated. - Throughout the paper, starting with line 15, I think the phrase "worst prompt" is not precise---we cannot call them good or bad per se unless proven. I think that you mean the prompt paraphrase that leads to the worst performance. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper touches on limitations like not experimenting enough models and enough techniques in finding the "worst prompt". Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and for highlighting our work's strengths. We appreciate your constructive feedback on areas needing further elaboration. We address each of your points below. **[W1 & Q1]: definition of the "worst" or "best" performance & the number of paraphrases** **[A1]**: Thank you for your insightful questions. Your understanding is essentially correct. In our study, "worst" and "best" performance refer to the lowest and highest performance among all paraphrases of a query. The only correction needed is that the metric we use is based on the evaluator scoring the model's output on a continuous scale from 0 to 100. Therefore, we have: worst=min(prompt1, prompt2, …), and best=max(prompt1, prompt2, …). Indeed, the worst/best performance decreases/increases monotonically with the number of paraphrases (n). While it is impractical to list all possible paraphrases and calculate exact values, we find that these scores converge quickly with a sufficiently large number of paraphrases. Taking Llama-2-70B as an example, we calculated the worst/best performance and their difference from n=1 to 11 as below (we report the averaged score across all possible combinations and the standard deviations). For other models, we observe the same trend of changes. To balance the evaluation efficiency and robustness, we decided to construct 10 paraphrases for each query.
| Metric | n=1 | n=2 | n=3 | n=4 | n=5 | n=6 | n=7 | n=8 | n=9 | n=10 | n=11 |
| :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| Worst | 29.18(1.42) | 21.01(1.25) | 17.48(1.26) | 15.4(1.14) | 13.95(1.02) | 12.85(0.92) | 11.95(0.83) | 11.19(0.73) | 10.53(0.59) | 9.93(0.42) | 9.38(0.0) |
| Best | 29.18(1.42) | 37.34(1.82) | 41.97(1.77) | 45.17(1.64) | 47.55(1.47) | 49.42(1.28) | 50.94(1.08) | 52.18(0.88) | 53.23(0.67) | 54.11(0.43) | 54.86(0.0) |
| Best - Worst | 0.0(0.0) | 16.33(2.48) | 24.49(2.54) | 29.77(2.32) | 33.6(2.06) | 36.57(1.79) | 38.98(1.53) | 40.99(1.25) | 42.7(0.96) | 44.18(0.63) | 45.48(0.0) |

**[W2]: Does the paraphrasing process introduce errors, domain shifts, etc?** **[A2]**: We would like to clarify that we manually reviewed and revised all paraphrases to ensure semantic equivalence and human-like fluency, so that errors and domain shifts are largely prevented. Regarding the relationship between the average and original performance, we consider the instances where the average performance is lower than the original to be coincidental, as there are also instances where the opposite is true (2/7 in Table 1, 5/12 in Table 4). Note that the original performance is measured only by a single paraphrase. The randomness further underscores the value of our benchmark setup. **[Q2]: Clarification on the phrase "worst prompt".** **[A3]**: Thank you for pointing this out. We intend to use "worst prompt" to refer to the paraphrase of a query that causes the model to perform the worst. We will clarify this in our next version. --- Rebuttal 2: Comment: Thank you for the response. I think my questions are largely resolved. I have adjusted my score. Good luck. I agree with Reviewer H71g that a few other papers that have studied prompt variations and robustness can be referenced. --- Rebuttal Comment 2.1: Comment: Thank you for your positive feedback!
We are glad that our rebuttal addresses your concerns. We will also address these concerns and incorporate all your suggestions in our paper. Thanks again for your insightful comments that help improve our work greatly.
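The worst/best definition clarified in [A1] above (worst = min, best = max over the evaluator's per-paraphrase scores on a 0-100 scale) can be sketched as follows; the scores below are hypothetical placeholder values:

```python
def paraphrase_metrics(scores):
    """Worst/best/average performance for a single query, where `scores`
    holds the evaluator's 0-100 score on each paraphrase:
    worst = min(...), best = max(...)."""
    return {
        "worst": min(scores),
        "best": max(scores),
        "avg": sum(scores) / len(scores),
    }

# Hypothetical scores of one model on five paraphrases of one query.
m5 = paraphrase_metrics([29.2, 12.5, 41.0, 8.3, 33.7])
# Adding paraphrases can only lower `worst` and raise `best`, which is
# why those columns move monotonically with n in the rebuttal's table.
m7 = paraphrase_metrics([29.2, 12.5, 41.0, 8.3, 33.7, 5.0, 60.0])
```

This also makes the monotonicity point concrete: min over a superset of scores can never exceed min over the subset, and symmetrically for max.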
null
null
null
null
null
null
Universality in Transfer Learning for Linear Models
Accept (poster)
Summary: The paper derives a new universality result, which can be leveraged to solve a mirror descent optimization problem for a large family of data distributions by relating the solution to a Gaussian distribution with matching first and second-order statistics. This result is then applied to analyze transfer learning for linear models in the setting of regression and binary classification. Strengths: The paper derives a new universality result by adapting an existing method. The universality is used to extend results about transfer learning in linear regression models to a larger class of data distributions, and further results are developed in the setting of binary classification. The paper is overall well-written and easy to follow. Weaknesses: Parts of Assumptions 2 and 3 seem unintuitive to me, see the questions below. Technical Quality: 3 Clarity: 3 Questions for Authors: In assumption 2.2, why should the covariance depend on the mean $\mu$? In assumption 2.3 and 3, how can the condition hold for any matrix of bounded operator norm? It seems possible to construct a matrix for which the condition does not hold. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their comments and suggestions to help us improve the quality of our work. Below we address the questions and other points raised in the review one by one. **In assumption 2.2, why should the covariance depend on the mean $\mu$?** We apologize for the confusion. Assumption 2.2 contains a typo that we will make sure to fix in the final version. It should indeed be $\mathcal{N}(a^T\mu, a^T\Sigma a)$. We would also like to remark that this typo was contained only in assumption 2.2 and did not affect our proofs and calculations. **In assumption 2.3 and 3, how can the condition hold for any matrix of bounded operator norm? It seems possible to construct a matrix for which the condition does not hold.** We will illustrate this property in the case when $\eta \sim \mathcal{N}(0, \frac{I_d}{d})$ so that asymptotically $\lVert \eta \rVert \rightarrow 1$. One can then first replace $C$ with $\frac{C + C^T}{2}$ making it symmetric and then diagonalize the latter, meaning we can replace $C$ by $diag(\lambda_1, \dots, \lambda_d)$ where all $|\lambda_i|$ are bounded by a constant $K>0$. Then $\eta^TC\eta$ has mean $\frac{\lambda_1 + \dots + \lambda_d}{d} = \frac{Tr(C)}{d}$ and variance $\frac{2(\lambda_1^2 + \dots + \lambda_d^2)}{d^2} \le \frac{2K^2}{d}$. The latter implies that the variance goes to zero, and therefore the difference $\eta^TC\eta - \frac{Tr(C)}{d}$ is close to zero with high probability. As this might be a bit counterintuitive, let us consider the case of low-rank $C$ in more detail, for example $C = e_1e_1^T$, where $e_1$ is the first basis vector. Then $\eta^TC\eta$ is a $\chi^2$ random variable but it still converges to $0$ because of its normalization. Note that $\frac{Tr(C)}{d} = \frac{1}{d}$ also goes to $0$ and thus both quantities $\eta^TC\eta$ and $\frac{Tr(C)}{d}$ go to zero as $d \to \infty$. --- Rebuttal Comment 1.1: Comment: We thank the authors for their response. 
- Thanks for the clarification on assumption 2.2. - I am still unsure about a few points about assumption 2.3 and 3, perhaps it is just a misunderstanding of notation/vocabulary. Based on the authors' response, it seems as if we need to choose a specific sequence of matrices $C_1, C_2, \dots$ for each dimension $d$ as we take $d \to \infty$. Could the authors kindly provide a formal mathematical statement corresponding to these assumptions? If this can be clarified, I would be happy to raise my score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their very constructive comment. We are sorry for formulating these assumptions somewhat loosely and would like to thank the reviewer for bringing it to our attention. We will make sure to resolve this in the final version. First, our universality result is asymptotic and holds when $d \to \infty$ (albeit in simulations we already see a very close match between the generalization errors for the non-Gaussian data and the matching Gaussian data in the sense of Definition 3 when $d$ is on the order of hundreds). That is, formally we assume that there is a growing sequence of dimensions $d_i \to \infty$ along with sequences of initialization points $w_0^{(i)} \in \mathbb{R}^{d_i}$ and random vectors $a^{(i)} \in \mathbb{R}^{d_i}$ satisfying certain technical properties. Then for a certain class of optimization objectives in each of these dimensions we evaluate their value on the data matrix $A^{(i)}$ sampled from $a^{(i)}$ and on its matching Gaussian matrix $G^{(i)}$ and show that the difference between the values goes to zero in expectation as $i \to \infty$. As Assumptions 2.3 and 3 are very similar, we will proceed to explain formally what we mean in Assumption 3. As explained earlier, we have a sequence of $w_0^{(i)} \in \mathbb{R}^{d_i}$ and thus a sequence of $\eta^{(i)} \in \mathbb{R}^{d_i}$ normalized according to $\mathbb{E}_{\eta^{(i)}} \lVert \eta^{(i)} \rVert^2 = 1 - r$. 
Now Assumption 3 says: *Given any sequence of matrices $C_i \in \mathbb{R}^{d_i \times d_i}$ satisfying $||C_i||_{op} \le K$ for some constant $K>0$ and all $i>0$, it holds that ${\eta^{(i)}}^TC_i\eta^{(i)} - \frac{Tr(C_i)(1-r)}{d_i} \to 0$ in probability as $i \to \infty$.* As an example, note that Assumption $3$ holds for the sequence of Gaussian vectors $\eta^{(i)} \sim \mathcal{N}(0, \frac{(1-r)}{d_i}I_{d_i})$ for any sequence $d_i \to \infty$. --- Rebuttal 2: Comment: I am still not fully convinced of the current explanation. I understand (3) and (4) in the sense that for any fixed $x$, the LHS converges in probability to the RHS. However, equation (2) is a separate minimization problem for every $\mu$, and there is no guarantee that for a given $\mu$, equations (3) and (4) hold for all relevant values of $x$. Please let me know if there are any details I missed. --- Rebuttal Comment 2.1: Comment: We would like to thank the reviewer for their thorough attention to details. This and the previous questions are instrumental for improving the readability of our proofs. The key consideration in this argument is that $x$ has a fixed dimension. Passing to limits in probability inside low-dimensional optimization problems is a common technical step in many papers applying Gaussian Comparison Inequalities to analyze problems in machine learning. There are two known ways to justify such steps formally: either by referring to different forms of the "convexity lemma" , such as Theorem II.1 from "Cox's Regression Model for Counting Processes: A Large Sample Study" by Andersen and Gill or by constructing other covering arguments tailored to a specific problem of interest. 
The former normally suffices to justify exchanging finite-dimensional optimizations and limits but cannot be applied directly in the proof of Lemma 1 because the $Q$-function is neither convex nor concave (albeit $Q(t)$ is convex when $t>0$ and concave otherwise, so maybe there is a smart way to use it here). It is also worth mentioning that the proof of Lemma 1 is the only proof in the paper where the convexity lemma does not suffice to ensure concentration of the encountered finite-dimensional objectives of interest. But another standard covering argument can be used in this case to swap the optimization of a fixed dimension with the limit in probability. Indeed, one can restrict $x$ to the unit sphere, since $w$ is defined uniquely only up to the direction and therefore so is $x$ due to the uniqueness of the correspondence between $x$ and $w(x)$. One would then proceed to take an $\epsilon$-net on the sphere, reduce the desired concentration bound for the objective to the union bound over the centers of the net due to the Lipschitzness of the objective assuming $\epsilon$ is taken to be small enough and deduce that this union bound goes to zero because it has a fixed number of terms each of which goes to zero.
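The concentration claim discussed in this thread — for $\eta \sim \mathcal{N}(0, \frac{I_d}{d})$ and $\lVert C \rVert_{op} \le K$, the quadratic form $\eta^T C \eta$ concentrates around $\frac{Tr(C)}{d}$ as $d$ grows — can be sanity-checked numerically. A minimal sketch (our own illustration, not the authors' code), using the low-rank case $C = e_1 e_1^T$ from the rebuttal:

```python
import numpy as np

# Numerical sanity check of Assumption 3 for Gaussian eta ~ N(0, I_d/d):
# the mean absolute deviation of eta^T C eta from Tr(C)/d shrinks with d.
rng = np.random.default_rng(0)

def mean_abs_deviation(d, C, trials=500):
    etas = rng.normal(0.0, 1.0 / np.sqrt(d), size=(trials, d))
    vals = np.sum((etas @ C) * etas, axis=1)  # eta^T C eta per trial
    return float(np.mean(np.abs(vals - np.trace(C) / d)))

for d in (100, 1000):
    C = np.zeros((d, d))
    C[0, 0] = 1.0  # low-rank case C = e1 e1^T, operator norm 1
    print(d, mean_abs_deviation(d, C))  # deviation shrinks roughly like 1/d
```

With $C = e_1 e_1^T$, the quadratic form is a scaled $\chi^2_1$ variable with mean $\frac{1}{d}$, so both it and $\frac{Tr(C)}{d}$ vanish as $d \to \infty$, matching the argument above.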
Summary: The paper investigates the application of transfer learning within the framework of linear models. The authors focus on a model-based approach, where a model pre-trained on a source distribution is fine-tuned on a few samples from a target distribution. Authors extend the concept of universality, traditionally used in random matrix theory and high-dimensional statistics, to the context of transfer learning. They demonstrate that certain properties of linear models trained on large datasets can be transferred to new tasks with minimal data. They also provide a thorough theoretical analysis, establishing conditions under which the transfer learning approach achieves performance close to that of a model trained directly on the target task with abundant data. Strengths: Universality has traditionally been applied in random matrix theory and high-dimensional statistics, but not in the context of transfer learning. The authors show properties of linear models trained on large datasets can effectively transfer to new tasks with limited data. The authors explore the conditions under which their transfer learning approach is effective. Combination of theoretical analysis and empirical validation provides a comprehensive exploration of this concept. Weaknesses: 1. The empirical validation, while convincing, is somewhat limited in scope. The experiments are conducted on a few specific tasks and datasets, which may not fully capture the diversity of real-world applications. Datasets with different characteristics, such as varying levels of noise, feature distributions, and dimensionalities, could provide a more comprehensive assessment of the proposed methods. 2. The comparison with existing transfer learning methods is limited. The paper could benefit from a more comprehensive comparison to highlight the advantages and limitations of the proposed approach relative to current state-of-the-art methods. 3. 
The paper shows that certain properties of linear models trained on large datasets can be transferred to new tasks with minimal data, but in real applications, is there any indication of when transfer methods are useful? Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Can you provide more details on the boundary conditions under which the universality principle might fail? Are there specific cases or distributions where this principle does not hold? 2. Can you elaborate on the choice of datasets and tasks used for empirical validation? Are there plans to test the proposed method on more diverse and complex datasets? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: While the authors provide thorough theoretical proofs and some experimental validation, the paper lacks comparative experiments with existing methods. This makes it difficult to fully evaluate the practical advantages and limitations of the proposed approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their comments and suggestions to help us improve the quality of our work. Below we address the questions and other points raised in the review one by one. **1. The empirical validation, while convincing, is somewhat limited in scope. The experiments are conducted on a few specific tasks and datasets, which may not fully capture the diversity of real-world applications. Datasets with different characteristics, such as varying levels of noise, feature distributions, and dimensionalities, could provide a more comprehensive assessment of the proposed methods.** We varied levels of noise, feature distributions and dimensionalities in the synthetic examples presented in the paper. This being said, in the final manuscript, we will extend our experiments to some real-world datasets, showcasing that universality holds for them, too, and would like to thank the reviewer for making this point. **2. The comparison with existing transfer learning methods is limited. The paper could benefit from a more comprehensive comparison to highlight the advantages and limitations of the proposed approach relative to current state-of-the-art methods.** We do not claim to propose a novel transfer learning method, but, rather, to analyze theoretically a popular approach to transfer learning. We apologize for any confusion and will clarify this in the final version of the manuscript. **3. The paper shows that certain properties of linear models trained on large datasets can be transferred to new tasks with minimal data, but in real applications, is there any indication of when transfer methods are useful?** To make our analyses applicable to more real-life scenarios, we would need to extend them to deep models. We believe it should be possible to do so by linearizing the networks via the Neural Tangent Kernel approach, but it requires verifying certain technical properties and is left as a subject for future work. **1. 
Can you provide more details on the boundary conditions under which the universality principle might fail? Are there specific cases or distributions where this principle does not hold?** We provide an example from Han, Q. and Shen, Y. (2023). "Universality of regularized regression estimators in high dimensions", *The Annals of Statistics*. Take $x$ to be of the form $qg$, where $q$ is a Bernoulli random variable (1 with probability $p$ and 0 with probability $1-p$) and $g \sim \mathcal{N}(0, \frac{I_d}{d})$; then Han and Shen show that Gaussian universality does not hold for the ridge regression objective. In terms of our assumptions, this distribution violates the variance property from Definition 2.5. **2. Can you elaborate on the choice of datasets and tasks used for empirical validation? Are there plans to test the proposed method on more diverse and complex datasets?** We would be happy to extend our experiments to real-world datasets, comparing the test errors for models learned from these datasets and from the Gaussian mixtures with matching means and covariances. We will include the results in the final version of the paper and we appreciate the reviewer's suggestion.
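The Han–Shen counterexample above can be made concrete with a small simulation — our own sketch, not from the paper: samples $x = qg$ have the same mean and covariance as the matching Gaussian $\mathcal{N}(0, \frac{p}{d} I_d)$, yet $\lVert x \rVert^2$ is exactly zero with probability $1-p$, so its variance does not vanish with $d$, violating the variance property mentioned in the rebuttal.

```python
import numpy as np

# Sketch of the counterexample distribution x = q * g (q ~ Bernoulli(p)
# scalar, g ~ N(0, I_d/d)) versus its matching Gaussian N(0, p * I_d / d).
rng = np.random.default_rng(1)
d, p, n = 50, 0.3, 100_000

q = rng.binomial(1, p, size=(n, 1))               # scalar mask per sample
g = rng.normal(0.0, 1.0 / np.sqrt(d), size=(n, d))
x = q * g                                          # first two moments match...
x_gauss = rng.normal(0.0, np.sqrt(p / d), size=(n, d))

# ...but ||x||^2 is 0 with probability 1 - p, so its variance stays bounded
# away from zero while the Gaussian's variance shrinks like 1/d.
print(np.var(np.sum(x**2, axis=1)), np.var(np.sum(x_gauss**2, axis=1)))
```

Both squared norms have mean $\approx p$, but their fluctuations differ by orders of magnitude, which is exactly the kind of mismatch that breaks universality for ridge regression.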
Summary: The paper studies model-based transfer learning. In this setting, a model is pre-trained on the source data, and the learner aims to fine-tune it on the target data by running SGD initialized at the pre-trained model. The paper focuses on linear regression and classification, aiming to generalize the assumption of Gaussianity to general distributions and investigate the scenarios where fine-tuning on the target data can or cannot be beneficial. Strengths: The paper derives a universality result that allows replacing Gaussian data with more general distributions. It then analyzes the generalization error in linear regression and classification for SGD initialized at a model pre-trained on the source data. The study shows that when the noise in the data is high, fine-tuning can be harmful, and just using the model pre-trained on the source data results in a lower error. Conversely, when the noise is less than the error of the model pre-trained on the source data, transfer learning improves test performance. Weaknesses: The paper attempts to generalize the assumption of Gaussianity but is limited to the linear model, which is inherently restrictive. Furthermore, it focuses exclusively on a specific algorithm—SGD initialized at the model pre-trained on the source data—without demonstrating the optimality of these results or determining whether better results are achievable with alternative algorithms. Technical Quality: 3 Clarity: 2 Questions for Authors: It appears that there are some parameters in the paper that have not been defined. In line 194, "p" is used without definition. In equation (7), $\kappa$ is referenced without definition. In line 206, it's unclear what "p(r)" represents. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The paper does not have any potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their comments and suggestions to help us improve the quality of our work. Below we address the questions and other points raised in the review one by one. **The paper attempts to generalize the assumption of Gaussianity but is limited to the linear model, which is inherently restrictive. Furthermore, it focuses exclusively on a specific algorithm—SGD initialized at the model pre-trained on the source data—without demonstrating the optimality of these results or determining whether better results are achievable with alternative algorithms.** For extending the current approach to deep architectures, one could linearize the network using the Neural Tangent Kernel approach and carefully verify all necessary technical details. We leave this as a direction for future work. We do not prove universality only for SGD but rather for a certain class of *objectives* (described in the statement of Theorem 1) that could potentially be optimized using other optimization methods and would still exhibit universality. Our results apply to the objective coming from the implicit regularization property of SGD but they extend far beyond that, and even this objective could potentially be solved by other methods. Finally, studying the optimality of SGD for transfer learning goes beyond the scope of this work. We only claim to analyze rigorously a popular approach to transfer learning. **In line 194, "p" is used without definition.** Line $194$ was meant to say that the empirical spectral densities $\hat{p}_N$ converge to a limit, which we will denote by $p$. That is, $p$ is the limiting spectral density. We are sorry for the confusion this caused and will make it clearer in the final version. **In equation (7), $\kappa$ is referenced without definition.** $\kappa$ is defined in the beginning of line $104$ as the ratio of the number of parameters and the number of data points in the target distribution. 
This being said, we understand that it's a good practice to remind the reader of the notation within the theorem itself and will do so in the final version of the paper. **In line 206, it's unclear what "p(r)" represents.** This is the same limiting spectral density $p$ as in line $194$, written as a function of its argument $r$.
Summary: This work considers transfer learning (fine-tuning) for linear models in the over-parameterized regime in the proportional regime $k / n \rightarrow \infty$. In this setting, gradient descent converges to the solution of a convex optimization problem with linear constraints. This work builds on this result to show that for a distribution $P$, the test error converges to the test error of a Gaussian distribution with mean and covariance equal to that of $P$---in other words, universality holds. Strengths: This work brings a fairly atypical and relatively unexplored perspective to transfer learning. The results hold for a large family of distributions (theorem 1), although limited to linear models in the infinite width regime. Finally, the results indicate that transfer learning depends on the noise levels of the problem, i.e., one should choose to fine-tune the model if the noise level is very high (remark 1). I unfortunately did not check all the proofs carefully but appreciate the authors' attempt at distilling the results into the main paper despite the depth of results. Weaknesses: My main concern is that the setting is not conducive to studying transfer learning. Transfer learning is a fundamentally small-data phenomenon, i.e., the target dataset usually contains a limited number of samples. However, the results (in the proportional regime) do not include: (1) The number of samples in each task or (2) the distance between the two tasks. The only quantity that features in the equations is the noise level. I also believe the presentation could be better. For example, some text explaining theorem 1 (in English) or explaining what $\theta$ and $t$ represent in theorem 2 could be helpful to a reader. I also wonder why the authors chose to call it model-based transfer. I have not heard of this specific term before and it would help if the authors cited previous references to model-based transfer learning. Isn't fine-tuning a more appropriate term? 
Finally, the looming question is the connection to deep networks. While perhaps out of the scope of this paper, there are questions about how this affects or relates to the practice of deep learning. Are linear models in the proportional regime informative for how we should do transfer learning in practice? Technical Quality: 4 Clarity: 3 Questions for Authors: 1. Why is renormalization needed in the classification problem setting? 2. In definition 2, the authors claim that the assumptions are satisfied in practice. Could the authors expand on line 156-160 and explain the results of Seddik et al.? 3. How do I interpret quantities like $t$ and $\theta$ in theorem 2? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 2 Limitations: Limitations are addressed but the authors could comment on the assumptions and their limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their comments and suggestions to help us improve the quality of our work. Below we address the questions and other points raised in the review one by one. **My main concern is that the setting is not conducive to studying transfer learning. Transfer learning is a fundamentally small-data phenomenon, i.e., the target dataset usually contains a limited number of samples. However, the results (in the proportional regime) do not include: (1) The number of samples in each task or (2) the distance between the two tasks. The only quantity that features in the equations is the noise level.** We apologize for any possible lack of clarity in the exposition. We look at the problem in the following way: instead of explicitly defining the source distribution, and the number of samples from it, we capture the effect of training on the source distribution through the pre-trained solution $w_0$. We use $w_0$ as the initialization point when fine-tuning on the target distribution. Then, for example, in the context of regression [Theorem 2, eq.(7)], the distance between the tasks (i.e., the source and target distributions) is essentially reflected by $e_a$, which is defined as the generalization error of the source-trained model, $w_0$, on the target distribution. The number of samples from the target distribution is included in our analysis and is reflected by the quantity $\kappa = \frac{d}{n}$, where $d$ is the number of parameters and $n$ is the number of samples. We will do our best to introduce these clarifications in the final version of the paper. **I also believe the presentation could be better. For example, some text explaining theorem 1 (in English) or explaining what $\theta$ and $t$ represent in theorem 2 could be helpful to a reader. I also wonder why the authors chose to call it model-based transfer. 
I have not heard of this specific term before and it would help if the authors cited previous references to model-based transfer learning. Isn't fine-tuning a more appropriate term?** Theorem 1 says that in a certain technical sense we can examine different properties of the weights trained on data matrix $A$ satisfying Definition 2 by relating them to the properties of the weights trained on its matching Gaussian matrix $G$ (see Definition 3). We will add an expanded version of this explanation into the final version and would like to thank the reviewer for making this point. Please refer to the answer to question 3 below for a discussion about $\theta$ and $t$ from Theorem 2. We learned of the terms "instance-based transfer learning" and "network-based transfer learning" from the review paper by C. Tan et al. "A survey on deep transfer learning". We then preferred to use "model-based" instead of "network-based" as we introduced these terms only to explain that there are two main approaches to transfer learning, one of which (instance-based) mixes the target and the source distributions together and the other keeps the distributions untouched and transfers the *model* from the source to the target. This being said, we agree that calling it fine-tuning could have been a better choice and we will take it into account for the final version. **Finally, the looming question is the connection to deep networks. While perhaps out of the scope of this paper, there are questions about how this affects or relates to the practice of deep learning. Are linear models in the proportional regime informative for how we should do transfer learning in practice?** As far as we understand, sometimes in practice one pre-trains a deep model and then fine-tunes only the last layer on the target distribution (see, e.g., "Last-Layer Fairness Fine-tuning is Simple and Effective for Neural Networks" by Mao, Yuzhen et al.), essentially making the model linear for our purposes. 
This is the most straightforward potential application of our results to real-world models. Another way to extend our results to deep models could be linearizing them using the Neural Tangent Kernel approach. We leave both of these directions for future work as pursuing them requires careful verification of several technical steps. **1. Why is renormalization needed in the classification problem setting?** We find such a renormalization meaningful because, in the classification setting, the outputs of the model depend only on the direction of the vector of weights but not on its magnitude. More explicitly, $w_0$ and $cw_0$ for $c \ne 0$ have the same performance, making them equally good solutions. However, initializing at $w_0$ and $cw_0$ yields different solutions after training on source distributions. Putting all these remarks together, we would like to ensure that the output of the fine-tuning step depends only on the direction of $w_0$, which is exactly what the renormalization step does. **2. In definition 2, the authors claim that the assumptions are satisfied in practice. Could the authors expand on line 156-160 and explain the results of Seddik et al.?** The Lipschitz Concentration Property (LCP) defined on line 156 says that for any Lipschitz $\phi: \mathbb{R}^d \to \mathbb{R}$ the tails of $\phi(x)$ decay very fast, where $x \in \mathbb{R}^d$ comes from the data distribution. Seddik et al. show that any distribution generated from a GAN with weight matrices of bounded norm satisfies LCP. This follows from the well-known facts that Gaussian random vectors satisfy LCP and applying a Lipschitz function to a distribution satisfying LCP yields a distribution that also satisfies LCP. **3. How do I interpret quantities like $t$ and $\theta$ in theorem 2?** It does not seem to be possible to explain what $\theta$ and $t$ are in plain English, but these are certain scalars determined solely by the covariance matrix $R_x$ of the data. 
For example, if $R_x = r_0I_d$, then the expressions simplify to $t = \left(\frac{d-n}{d}\right)^2$ and $\theta = \frac{2\sqrt{n}}{r_0 (d-n)}$. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed responses to all the questions, in particular, the clarifications for definition 2 and theorem 1 were helpful. Regarding my other concerns, I understand that the target samples are included in the term $\kappa$. However, my broader question is whether interesting results can come out of explicitly modeling the source data as opposed to starting from a pre-trained solution $w_0$. For example, if you look at previous work on transfer learning, such as Ben-David et al. [1], the generalization bounds explicitly model the source distributions, which leads to defining a distance between tasks (the $H \Delta H$ divergence). By modeling the source data, can you tell *how* to arrive at a good initialization $w_0$? The existing setup is certainly interesting but relates more directly to model initialization than to a typical theoretical setup for transfer learning. I will stay at my current score since I do not see immediate connections to explain or answer questions for deep networks (networks don't always operate in the linear regime with NTKs). However, I find the lens of universality to be different from prior theoretical work on transfer learning and support the acceptance of this work. [1] Ben-David, Shai, et al. "A theory of learning from different domains." Machine Learning 79 (2010): 151-175.
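The isotropic special case stated in the rebuttal above can be turned into a small worked instance — our own sketch with assumed example values of $d$, $n$, and $r_0$:

```python
import math

# Worked instance of the isotropic case R_x = r0 * I_d from the rebuttal:
# t = ((d - n) / d)^2 and theta = 2 * sqrt(n) / (r0 * (d - n)).
def t_theta(d, n, r0):
    t = ((d - n) / d) ** 2
    theta = 2.0 * math.sqrt(n) / (r0 * (d - n))
    return t, theta

t, theta = t_theta(d=1000, n=250, r0=1.0)
print(t, theta)  # t = 0.5625
```

Here $\kappa = \frac{d}{n} = 4$, illustrating that in the isotropic case $t$ and $\theta$ depend only on the dimension counts and the covariance scale $r_0$.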
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Unveiling and Mitigating Backdoor Vulnerabilities based on Unlearning Weight Changes and Backdoor Activeness
Accept (poster)
Summary: This paper first makes two observations: neurons exhibiting significant weight changes during clean unlearning also tend to play crucial roles in poison unlearning, and neurons in the backdoored model are always more active compared to those in the clean model. The authors showcase these observations on commonly used backdoor attacks and provide further explanations for them. Based on these two observations, a model is first unlearned on clean data where weights with the highest changes are re-initialized, and then the model is optimized by activeness-aware fine-tuning. Extensive experiments are provided to support the proposed defense. Strengths: The analysis is clear and well-formulated, based on which the proposed defense is effective and easy to interpret. Extensive experiments on different types of backdoor attacks are provided. Weaknesses: The design principle of the proposed method (TSBD) is very similar to RNP [22]. Both TSBD and RNP follow the two-stage defense setup, including clean unlearning and recovering. The performance of TSBD is also similar to RNP except for the SIG backdoor. The authors provide some explanations about the difference between RNP and TSBD, but they are not convincing. The core technical difference can be further clarified. Table 3 provides the ablation study on the zero reinitialization ratios, indicating that reinitialization is a sensitive parameter. In a practical threat scenario, how would the defender select a proper reinitialization ratio? It is also unclear whether the neuron ratio selection plays a key role in increasing the effectiveness. Closely related work [a] is missing, where clean unlearning by mask optimization is also discussed. [a] Towards reliable and efficient backdoor trigger inversion via decoupling benign features. In ICLR, 2024 Technical Quality: 2 Clarity: 3 Questions for Authors: My main concern is the difference between TSBD and RNP, as well as the practicality of TSBD as a defense. 
Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors did not provide analysis on limitations of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer kmL5, thank you very much for your careful review of our paper and thoughtful comments. We are encouraged by your positive comments on our **well-formulated analysis** and **effective method**. We hope the following responses could help clarify the potential misunderstanding and alleviate your concerns. ___ **W(Weakness)1: Concern about the differences compared with RNP.** **RW(Response to Weakness)1:** We appreciate your concern about the differences compared with RNP. We would like to clarify that **TSBD is a newly designed method with several fundamental differences compared with RNP**. Please refer to **R1 of Author Rebuttal** for more detailed comparisons. ___ **W2: Concern about the practicality of the proposed method.** **RW2:** Thank you for your concern about the practicality of TSBD. We would like to point out that the purpose of Table 3 in Section 4.3 is to compare the effectiveness of different reinitialization schemes, where a fixed neuron ratio $n\%$ (e.g., 10%) is used for the three versions. Specifically, let's denote the top-$n\%$ selected neurons as $\theta^n$. $V_1$ means that $\theta^n$ are reinitialized thoroughly; $V_2$ means that only the top-70% subweights of each neuron in $\theta^n$ are reinitialized; $V_3$ means that the top-70% subweights of the whole $\theta^n$ are reinitialized. Therefore, Table 3 in Section 4.3 indicates that some subweights in the selected neurons are important to the clean functionality, and thus we should keep them properly. The sensitivity analysis on the neuron ratio is illustrated in **Section 4.4** and **Appendix G**, which show that the hyperparameter $n$ is **insensitive but important** to the final performance. In a practical scenario, we can set $n$ freely from 10% to 70% (see **Figure 8 of Appendix G**). For a more detailed analysis of the importance of stage 1, please refer to the **R2 of Author Rebuttal**. 
___ **W3: Suggestion for additional related work [1].** **RW3:** Thanks for providing us with this valuable information. We will add this paper (**BTI-DBF**) to the related work in the revised version. Here, we provide a brief version of *unlearning for backdoor defense* containing this paper as follows: > *Model unlearning* can be considered as an opposite process against learning, aiming to remove the impact of a training subset from a trained model [2]. In the field of backdoor defense, unlearning the possible poisoned data (i.e., *poison unlearning*) is an effective way to remove the learned backdoor. NC [3] and BTI-DBF [1] try to generate the possible poisoned data with either trigger inversion or poison-data generator; ABL [4] and D-BR [5] focus on filtering out the poisoned data from the training dataset according to their attributes during training; i-BAU [6] and SAU [7] assume the adversarial perturbation as a type of trigger and generate poisoned data with adversarial example. To avoid inducing bias, recent work tries to directly unlearn the available clean data (i.e., *clean unlearning*) for defense. RNP [8] finds that a clean-unlearned model can help expose the backdoor neurons for the subsequent pruning-mask learning. Different from the existing works that focus on utilizing the unlearning techniques, we fill the gap of exploring the *clean* and *poison unlearning* process on the backdoored model and provide insights from their correlation for defense. ___ [1] Towards reliable and efficient backdoor trigger inversion via decoupling benign features. ICLR 2024. [2] Machine unlearning. Symposium on Security and Privacy (SP) 2021. [3] Neural cleanse: Identifying and mitigating backdoor attacks in neural networks. Symposium on Security and Privacy (SP) 2019. [4] Anti-backdoor learning: Training clean models on poisoned data. NeurIPS 2021. [5] Effective backdoor defense by exploiting sensitivity of poisoned samples. NeurIPS 2022. 
[6] Adversarial unlearning of backdoors via implicit hypergradient. ICLR 2022. [7] Shared adversarial unlearning: Backdoor mitigation by unlearning shared adversarial examples. NeurIPS 2023. [8] Reconstructive neuron pruning for backdoor defense. ICML 2023. --- Rebuttal 2: Comment: Dear Reviewer kmL5: We would like to express our sincere gratitude for your valuable insights and suggestions on our work. We have tried our best to address the concerns and queries you raised during the rebuttal process. However, we would greatly appreciate knowing **whether our responses have effectively resolved your concerns**. Your feedback will be instrumental in improving the quality of our work. Sincerely, Authors --- Rebuttal Comment 2.1: Comment: Thank the authors for your efforts. Most of my concerns are addressed. --- Reply to Comment 2.1.1: Comment: Dear Reviewer kmL5: We deeply appreciate your thoughtful feedback and the effort you have put into reviewing our paper. Your suggestions will be taken into account in the revised version. Thank you for your comprehensive review and positive feedback. Sincerely, Authors
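For context on the *clean unlearning* operation referenced throughout this thread: it amounts to gradient ascent on the loss over a small clean set. A toy sketch on a linear softmax classifier standing in for a DNN (the function name, fixed-step schedule, and learning rate are illustrative assumptions):

```python
import numpy as np

def clean_unlearn(W, X, y, steps=50, lr=0.1):
    """Clean-unlearning sketch: gradient *ascent* on the clean cross-entropy loss.

    W: (classes, features) weights; X: (n, features) clean inputs; y: (n,) labels.
    A toy stand-in for unlearning a DNN on the accessible clean subset.
    """
    W = W.copy()
    for _ in range(steps):
        logits = X @ W.T
        logits -= logits.max(axis=1, keepdims=True)     # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        onehot = np.eye(W.shape[0])[y]
        grad = (p - onehot).T @ X / len(X)              # d(cross-entropy)/dW
        W += lr * grad                                  # ascent: maximize clean loss
    return W
```

The per-neuron NWC discussed in the thread would then be obtained by summing the absolute difference between the unlearned and original weights over each neuron's fan-in.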
Summary: The authors propose a novel two-stage backdoor defense method, TSBD. The proposed method is based on two key observations: 1) the weight changes of neurons during clean and poison unlearning are correlated, 2) the backdoored neurons exhibit a larger gradient norm during unlearning. Respectively, the proposed defense method consists of 1) reinitializing neurons with high weight changes, 2) fine-tuning under gradient-norm regularization. The proposed method is compared with several state-of-the-art defense methods on different attacks, and a comprehensive ablation study is conducted. Strengths: 1. The paper is well-written and easy to follow. The figures are very clear. 2. The observations are interesting, providing insights into this field. 3. The proposed method is effective across different attacks, showing a low attack success rate and good defense effectiveness. The authors provide sufficient and convincing results. Weaknesses: 1. How is the performance on scaled-up experiments, e.g., ViT or ImageNet? 2. What is the computational overhead of the proposed method compared to others? Technical Quality: 4 Clarity: 4 Questions for Authors: Elaborated above. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors discuss the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer obuU, thank you very much for your positive appraisal and great interest in our paper. We are encouraged by your positive comments on our **good paper presentation**, **insightful observations**, and **convincing evaluations**. We hope the following responses can help answer your questions. ___ **W(Weakness)1: Performance on scaled-up experiments.** **RW(Response to Weakness)1:** We appreciate your interest in the scaled-up cases in TSBD. To further verify the **scalability** of our method, we evaluate its performance on a ViT-b-16 model with the CIFAR-10 dataset as follows:

**Table 1: Experimental Results for ViT-b-16**

| Attack | No Defense ACC | No Defense ASR | CLP ACC | CLP ASR | ANP ACC | ANP ASR | TSBD ACC | TSBD ASR |
|--|--|--|--|--|--|--|--|--|
| Input-aware | 91.65 | 92.30 | 90.55 | 79.34 | 90.40 | 50.69 | 86.11 | **4.43** |
| WaNet | 89.12 | 80.95 | 89.12 | 80.95 | 89.12 | 80.95 | 88.43 | **1.59** |

Note that the other settings follow the basic ones in section 4.1, e.g., the poisoning ratio is set to 10% and the target label is set to 0. The results demonstrate that **TSBD performs effectively on the scaled-up model**, achieving a low ASR and acceptable ACC. In contrast, CLP and ANP fail completely, with the ASR still at a high level, particularly for the WaNet attack. Due to time constraints, we postpone the testing on large-scale datasets to future work. ___ **W2: What is the computational overhead?** **RW2:** Thanks for your interest in the computational overhead of TSBD. We would like to emphasize that **TSBD is an effective and efficient method** that can defend against backdoor attacks with acceptable overhead. To support our statement, we now show the average computational time of each defense step in Table 2, including *Clean Unlearning*, *NWC Calculation*, *Zero Reinitialization*, and *Activeness-Aware Fine-Tuning*. We follow the same experimental setting as in section 4.1, using PreAct-ResNet18 with a 10% poisoning ratio. 
The experiments here are conducted on a server with an A6000 GPU and an AMD EPYC 7543 32-Core Processor CPU. We observe that the main computational overhead lies in the fine-tuning process. In contrast, the time required for clean unlearning does not increase proportionally with dataset complexity. This means that **TSBD is as efficient as other fine-tuning-based methods**. Moreover, we present a practical runtime comparison with other SOTA defenses in Table 3, including the loading and testing time needed in practice. As we can see, **TSBD is faster than most of the existing methods**.

**Table 2: Computational Time of Each Defense Step of TSBD**

| Defense Step | CIFAR-10 | Tiny ImageNet |
|--|--|--|
| *Clean Unlearning* | 20.84s | 17.90s |
| *NWC Calculation* | 0.03s | 0.03s |
| *Zero Reinitialization* | 1.34s | 1.29s |
| *Activeness-Aware Fine-Tuning* | 21.08s | 174.36s |

**Table 3: Practical Runtime Comparison on BackdoorBench**

| Dataset | FT | FP | ANP | NC | RNP | TSBD |
|--|--|--|--|--|--|--|
| CIFAR-10 | 358s | 855s | 505s | 733s | **123s** | 159s |
| Tiny ImageNet | 1649s | 20429s | 2578s | 37101s | 285s | **269s** |
Summary: The paper addresses the security threat posed by backdoor attacks in deep neural networks (DNNs). The authors explore model unlearning from the perspective of weight changes and gradient norms, making two key observations: weight changes between poison and clean unlearning are positively correlated, and neurons in backdoored models are more active than those in clean models. Based on these observations, they propose a Two-Stage Backdoor Defense (TSBD) method, involving Neuron Weight Change-based Backdoor Reinitialization and Activeness-Aware Fine-Tuning. Extensive experiments demonstrate the superior performance of their method compared to state-of-the-art approaches. Strengths: 1. The paper introduces a novel perspective on backdoor defense by exploring the correlation between weight changes in poison and clean unlearning and the activeness of neurons. This approach provides new insights into identifying and mitigating backdoor vulnerabilities. 2. The paper is well-organized and clearly presents its methodology, findings, and contributions. 3. The proposed TSBD method is rigorously evaluated through extensive experiments involving eight backdoor attacks on three benchmark datasets. Weaknesses: 1. The TSBD method involves additional steps such as clean unlearning, neuron weight change calculation, and activeness-aware fine-tuning, which may introduce computational overhead. An analysis of the computational cost and efficiency of the proposed method compared to existing defenses would be beneficial. 2. What causes the clean unlearning NWCs to exhibit a positive correlation with those in poison unlearning? I think this question has not been solved clearly. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. When selecting the clean dataset, how is the class distribution handled? Is it randomly selected or fully covered? Intuitively, would data related to the target label cause particularly high activation values? 2. 
I think the NWC strategy is not significantly different from the pruning method in RNP[1], and the Activeness-Aware Fine-tuning is also similar to the fine-tuning in RNP. Therefore, I hope the authors can emphasize more clearly the differences between their approach and RNP. 3. From the observations, it is apparent that there is a difference in neuron activation between clean and backdoored models. However, the use of gradient norm restriction in the unlearning process is applied to an already modified model (not the original backdoored model). The direct correlation between this and enhanced defense effectiveness does not seem sufficiently clear and is not within the scope of the initial observations. 4. Regarding the selection of hyperparameters in Activeness-Aware Fine-tuning, I believe this can significantly impact the experimental results, but there is no discussion on this aspect in the paper. [1] Li Y, Lyu X, Ma X, et al. Reconstructive neuron pruning for backdoor defense[C]//International Conference on Machine Learning. PMLR, 2023: 19837-19854. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I think the paper still has some limitations that the authors didn't mention. Please refer to my questions and weaknesses. Please add some discussions about data-free methods Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer P1hB, thank you very much for your careful review of our paper and thoughtful comments. We are encouraged by your positive comments on our **novel and insightful method**, **good paper presentation**, and **rigorous evaluation**. We hope the following responses can help alleviate your concerns. ___ **W(Weakness)1: Suggestion for the analysis of computational cost.** **RW(Response to Weakness)1:** Thanks for your constructive suggestion. We would like to refer you to the **RW2 of Reviewer obuU** for a comprehensive analysis, which emphasizes that **TSBD is an effective and efficient method**. ___ **W2: Why are clean-unlearning NWCs positively correlated with those in poison unlearning?** **RW2:** Thanks for your in-depth question on observation 1. This question has been discussed and analyzed from the perspective of neuron activation in **Section 3.3**. Briefly, **the neurons with higher *poison activations* are the main targets to be changed in both *clean* and *poison unlearning*.** They tend to decrease the poison activation during poison unlearning, while increasing it during clean unlearning, keeping the clean activation nearly unchanged (see **Figure 3**). We also deduce that the weight change on a neuron positively influences its changes in clean and poison activation. Therefore, regardless of whether clean or poisoned data are used for unlearning, those neurons with higher poison activations in the backdoored model undergo larger weight changes than the others, as reflected in the positive correlation of neuron weight changes (see **observation 1 of Figure 1**). ___ **Q(Question)1: Concern about clean-data selection and class distribution.** **RQ(Response to Question)1:** Thanks for your interest in the clean dataset used for defense. We strictly follow the same data setting in BackdoorBench [1] for a fair comparison with the baselines. 
Specifically, 5% of clean data are **randomly selected** from the unpoisoned dataset with no manipulation of the class distribution. Moreover, to answer the question of the correlation between target-label data and the activation values, we now average the activation values of the last convolutional layer on each class of CIFAR-10 for comparison. For each class, we randomly select 10 samples as the input and capture the output of the target layer with an additional *ReLU* activation function. The results are illustrated in Table 1 (as below), which shows that there is **no obvious relationship between the target-label data and the activation**.

**Table 1: Average activation for the last convolutional layer on each class of CIFAR-10**

| Class $\rightarrow$ | 0 (target class) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|--|--|--|--|--|--|--|--|--|--|--|
| Avg. Activation | 0.2212 | 0.2146 | 0.2126 | 0.2175 | 0.2229 | 0.2231 | 0.2140 | 0.2177 | 0.2161 | 0.2186 |

___ **Q2: Concern about the differences compared with RNP.** **RQ2:** Thanks for your concern about the differences compared with RNP. We would like to refer you to **R1 of Author Rebuttal** for a comprehensive comparison, which clarifies that **TSBD is a newly designed method with several fundamental differences compared with RNP**. ___ **Q3: Concern about the gap between observation 2 and loss regularization on gradient.** **RQ3:** Thanks for your in-depth concern. The main idea we want to convey in observation 2 of Figure 1 is that **for an arbitrary clean model, after it has been attacked by a backdoor, the corresponding neurons will become more active than before**, i.e., they will exhibit a larger gradient norm during learning processes. **This is a general phenomenon, not limited to the initial backdoored model**. 
Therefore, for the Activeness-Aware Fine-Tuning in stage 2, we add an additional gradient norm regularization in the loss function to encourage a low-gradient-norm status for the optimized model after fine-tuning, which is considered closer to a clean model. **This purpose is independent of the intermediate model state after the zero reinitialization.** ___ **Q4: Suggestion for the hyperparameter experiment on Activeness-Aware Fine-Tuning.** **RQ4:** Thanks for your constructive suggestion. As the hyperparameters $r$ and $\alpha$ have been well-discussed for their influence on approximation performance in the original paper [2], and since they are only indirectly related to the backdoor, we follow the suggested settings, i.e., $r=0.05$ and $\alpha=0.7$. Here, we provide the tuning results following the tuning range in [2] under our experimental settings. These results were obtained on a BadNets-attacked PreAct-ResNet18. We observe that **the performance is insensitive (changes <2% in ACC and <1% in ASR) across different hyperparameter settings**, maintaining a high level of performance.

**Table 2: Hyperparameter Tuning for Activeness-Aware Fine-Tuning**

| $r$ | $\alpha$ | ACC | ASR |
|--|--|--|--|
| 0.05 | 0.7 | 90.72 | 1.31 |
| 0.01 | 0.7 | 90.68 | 1.30 |
| 0.02 | 0.7 | 91.03 | 1.50 |
| 0.1 | 0.7 | 90.90 | 1.30 |
| 0.2 | 0.7 | 89.64 | 1.04 |
| 0.05 | 0.1 | 91.32 | 1.40 |
| 0.05 | 0.3 | 91.24 | 1.59 |
| 0.05 | 0.5 | 91.00 | 1.63 |
| 0.05 | 0.9 | 90.68 | 1.26 |

___ [1] BackdoorBench: A Comprehensive Benchmark of Backdoor Learning. NeurIPS 2022 Datasets and Benchmarks Track. [2] Penalizing Gradient Norm for Efficiently Improving Generalization in Deep Learning. ICML 2022. --- Rebuttal Comment 1.1: Title: Concerns about the generalizability of the proposed scheme Comment: Thanks for considering my concerns, and I see the authors have provided more evaluations for validating its feasibility and effectiveness. 
An additional minor question: how compatible is this NWC-based Backdoor Reinitialization approach with "DHBE: Data-free Holistic Backdoor Erasing in Deep Neural Networks via Restricted Adversarial Distillation" (AsiaCCS) and "SupRTE: Suppressing Backdoor Injection in Federated Learning via Robust Trust Evaluation"? Besides, it is already difficult to innovate in this direction, and the limitations of NWC-based Backdoor Reinitialization methods should be discussed. --- Reply to Comment 1.1.1: Comment: Dear Reviewer P1hB, Thanks for your further feedback. We hope the following responses can address your concerns: - **About the compatibility of NWC-based Backdoor Reinitialization**. As illustrated in **Appendix F**, our proposed NWC has great compatibility with other methods, e.g., FP [1], for a better defense. We would clarify that NWC is also compatible with the defense methods you mentioned: - **DHBE** is a data-free defense aiming to distill a clean student model from a backdoored teacher using two adversarial processes [2]. Our NWC is compatible with the generated samples, since it shows similar behavior regardless of whether clean or poisoned data are used. Therefore, we can replace the *adversarial backdoor regularization* with our *NWC-based backdoor reinitialization*. Specifically, we can clean the backdoored teacher into a new benign teacher using NWC reinitialization and treat it as the regularization term to guide the learning of the student model. In this case, the student model can learn the high ACC from the backdoored teacher, as well as the low ASR from the benign teacher. Besides, the gradient norm regularization in stage 2 can also be used in the loss function to push the student model closer to a clean model during distillation. - **SupRTE** is a defense method specially designed for the *Federated Learning* scenario, which extracts the behavior representations from different clients and assigns scores for further weighting [3]. 
Our NWC from stage 1 and gradient norm regularization from stage 2 can be used as additional behavior representations for scoring. For example, clients with unusually high average NWCs and gradient norms compared to others can be recognized as potentially malicious. - **About the limitations.** We would like to point out that the limitations of our method have been discussed in the **Conclusion Section**. We consider the major concern to be *clean-data accessibility*, which may limit its applicability to real-world scenarios. Then, we mention the potential solutions using data generation and data-free techniques. For more limitations on performance, we have also conducted a comprehensive evaluation in the paper. As discussed in **Appendix H**, our method is less effective under a low poisoning ratio, e.g., 1%, and we further identify a possible reason: the weight changes in backdoored neurons become less pronounced. In future work, we plan to further explore NWC for data-free solutions and to improve performance in low-poisoning-ratio scenarios. Thanks again for your valuable time and constructive comments. Sincerely, Authors ___ [1] Fine-pruning: Defending against backdooring attacks on deep neural networks. RAID 2018. [2] DHBE: Data-free Holistic Backdoor Erasing in Deep Neural Networks via Restricted Adversarial Distillation. Asia CCS 2023. [3] SupRTE: Suppressing Backdoor Injection in Federated Learning via Robust Trust Evaluation. IEEE Intelligent Systems 2024.
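The gradient-norm regularization discussed in RQ4 above, with hyperparameters $r$ and $\alpha$, follows the Hessian-free finite-difference scheme of the cited ICML 2022 paper. A minimal sketch, assuming the standard form $g \approx (1-\alpha)\,\nabla L(\theta) + \alpha\,\nabla L\big(\theta + r\,\nabla L(\theta)/\|\nabla L(\theta)\|\big)$ and a user-supplied gradient oracle `grad_fn` (both assumptions for illustration):

```python
import numpy as np

def gn_penalized_grad(theta, grad_fn, r=0.05, alpha=0.7, eps=1e-12):
    """Approximate gradient of L(theta) + lambda * ||grad L(theta)||
    without forming a Hessian, using a finite difference with step r
    and mixing weight alpha = lambda / r."""
    g0 = grad_fn(theta)                                   # plain gradient
    g1 = grad_fn(theta + r * g0 / (np.linalg.norm(g0) + eps))  # perturbed gradient
    return (1 - alpha) * g0 + alpha * g1
```

In TSBD's stage 2 this per-step gradient would replace the plain fine-tuning gradient, steering optimization toward low-gradient-norm (more clean-model-like) regions.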
Summary: The paper introduces two key observations in backdoored models, presenting a two-stage backdoor attack defense method, where the two observations are the following: a strong positive correlation between weight changes in poison and clean unlearning, and the stronger neuron activation in backdoored models compared to clean models. The proposed two-stage defense method leverages these observations by reinitializing a certain proportion of backdoor-related neurons and subweights, and by suppressing the gradient norm during the fine-tuning process. The proposed method demonstrates state-of-the-art performance on the selected datasets and models with a wide range of ablation studies. Strengths: 1. The paper introduces a novel approach to backdoor defense. It provides two insights into the weights of backdoored models. Based on this, the authors introduce a two-stage defense mechanism. Also, the defense uses clean unlearning, which does not require poisoned data, to counter the backdoor attack. 2. While the scope of the experiment is limited in model architectures and datasets, the experiment shows superior performance. The experimental setup is thorough, covering a certain range of backdoor attacks. 3. The paper is well-organized, with figures and tables used to illustrate key observations and experimental results. The descriptions of the observations and the proposed defense method are detailed and precise. Weaknesses: 1. The scale of the experiments is too limited. While the authors compare the results between 8 other backdoor attack methods, the main experiments are only conducted on specific datasets and models (CIFAR-10, PreAct-ResNet18). This makes the generalizability of the results rather unclear. 2. The authors mention computational efficiency in Activeness-Aware Fine-Tuning by using an approximation scheme, but there are no experiments demonstrating the trade-off between efficiency and accuracy. 
Including such experiments would provide a more comprehensive understanding of the method's practical implications. 3. The concept of unlearning in backdoor attacks could be better explained in the Related Works section. Given its importance in this paper, a more detailed discussion would help contextualize the proposed method and highlight its significance in addressing existing challenges. 4. There is no mathematical proof. While it employs various loss functions and approximation techniques, and the empirical results are quite promising, there is no analysis explaining why the proposed method works. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. There is no statement or analysis about scalability and generalizability. Can you provide more detailed information about these with respect to different datasets and model architectures? 2. How can the clean unlearning process identify and affect backdoor-related neurons without causing degradation of the model's performance on clean data? 3. The section introducing the effectiveness of zero reinitialization on subweights in 4.3 Ablation Studies is a bit confusing. The difference between V2 and V3 is still not clear to me. 4. I have questions about the insensitivity of TSBD to both neuron ratio and weight ratio as explained in Section 4.4. While I understand that the method effectively selects backdoor-related neurons indicated as active neurons and that reinitializing them is effective, it is confusing that the performance remains similar across both low and high ratios. Is reinitializing itself the important part? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The authors mention in the introduction that clean unlearning is still underexplored, but it is unclear if this limitation has been fully addressed in the paper. Additionally, the paper lacks a discussion of its own limitations. 
Providing a clear discussion on the limitations of the proposed method and potential areas for future research would enhance the paper. The authors acknowledge that there remain challenges in fully mitigating backdoor effects without any access to clean data. They suggest that data generation techniques and data-free techniques might offer potential solutions. There is no discussion about the scalability of their proposed method, especially in large-scale or real-time applications. Addressing the computational overhead and resource requirements would be necessary for practical applications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer Vmzu, thank you very much for your careful review of our paper and thoughtful comments. We are encouraged by your positive comments on our **novel and insightful method**, **superior performance**, **thorough experimental setup**, and **good paper presentation**. We hope the following responses can help clarify potential misunderstandings and alleviate your concerns. ___ **W(Weakness)1&Q(Question)1: Concern about the scale of main experiments.** **R(Response)1:** We appreciate your concern about the scale of the main experiments. We would like to clarify the following points: - Our main experiments are not only conducted on CIFAR-10 with PreAct-ResNet18, but also cover the datasets **Tiny ImageNet** and **GTSRB**, and the model **VGG19-BN**. Please refer to **Section 4.2, Appendix D and E** for the details. - In this paper, we follow a similar testing range on models and datasets as in the previous SOTA works [1,2], which is considered sufficient to prove the generalizability of the results. To further prove the **generalizability** and **scalability**, we also evaluate our method on CIFAR-100 (as follows) and ViT-b-16 (due to the space limit, please refer to **RW1 of Reviewer obuU**). Note that we keep other settings the same as in section 4.1. The results show the superiority of TSBD.

**Table 1: Experiment Results for CIFAR-100 on PreAct-ResNet18**

| Attack | No Defense ACC | No Defense ASR | i-BAU ACC | i-BAU ASR | TSBD ACC | TSBD ASR |
|--|--|--|--|--|--|--|
| BadNets | 67.22 | 87.43 | 60.37 | 0.04 | 66.28 | 0.33 |
| Input-aware | 65.24 | 98.61 | 65.21 | 85.14 | 69.67 | 0.18 |

___ **W2: Suggestion for the experiment on computational efficiency of Activeness-Aware Fine-Tuning.** **RW(Response to Weakness)2:** Thanks for your interest in the computational efficiency. 
In fact, as pointed out in [3], it is infeasible to conduct such an experiment for DNNs, since it involves calculating a Hessian matrix (refer to **Appendix B**) without approximation, where the *time* and *space complexity* are $O(n^2)$ theoretically. We now provide a brief example on *PreAct-ResNet18*, which contains $n\approx11$ million parameter units. Its Hessian matrix would contain $n^2 \approx 124{,}794$ billion entries, far more than the parameter count of a LLaMA-65B model, and cannot be computed on a single GPU, e.g., an A6000 with 48 GB of memory. ___ **W3: Suggestion for adding unlearning to related work.** **RW3:** Thanks for your valuable suggestion. We will update some related works on unlearning-based backdoor defense to highlight our contribution in the revised version. Due to the space limit, a brief version is provided in the **RW3 of Reviewer kmL5**. ___ **W4: Suggestion for mathematical proof on explaining why TSBD works.** **RW4:** Thanks for your in-depth suggestion. We have explained the functionality of those important techniques in **Section 3**. For example, "Suggestions" in Section 3.2 clarify why both stages are needed. Additionally, in **Section 4**, we empirically validate the effectiveness of each component. In fact, beyond the empirical findings, we also attempted to mathematically derive observations 1 and 2. However, this seems to be a very difficult task. For example, to prove observation 1, we need to estimate the NWC values, which accumulate weight changes over the whole unlearning process. It involves estimating the total change of a variable (e.g., $x$) over $K$ steps of gradient descent, i.e., $\|x_K - x_1\| = \left\|\sum_{t=1}^{K-1}(x_{t+1}-x_{t})\right\|$. As far as we know, **in *optimization theory*, there exists no mathematical tool to estimate this quantity directly;** instead, more focus is placed on estimating the distance between $x_{t}$ and the limit point. In future work, we will continue to explore this issue. 
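The quantity in question can be illustrated numerically: the total weight change telescopes exactly over the per-step updates, while the per-step magnitudes that convergence analyses typically control only bound it from above via the triangle inequality. A small self-contained check (the simulated iterates are illustrative, not from any real training run):

```python
import numpy as np

rng = np.random.default_rng(1)
K, d = 50, 8
steps = 0.1 * rng.normal(size=(K - 1, d))               # simulated x_{t+1} - x_t
x = np.vstack([np.zeros(d), np.cumsum(steps, axis=0)])  # iterates x_1, ..., x_K

# The total change telescopes exactly over the per-step updates:
total = np.linalg.norm(x[-1] - x[0])
assert np.isclose(total, np.linalg.norm(steps.sum(axis=0)))

# Per-step magnitudes only bound it from above (triangle inequality),
# so they do not determine the accumulated NWC:
assert total <= np.linalg.norm(steps, axis=1).sum() + 1e-9
```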
___ **Q2: Explanation for the clean-unlearning capability.** **RQ(Response to Question)2:** Thanks for your question. We would like to point out that the *clean unlearning* in stage 1 will degrade the clean performance. However, since **the reinitialization is conducted on the original backdoored model**, the clean-performance degradation of the unlearned model does not affect the final result. Based on **observation 1 in Figure 1**, we find that the backdoor-related neurons that change the most in weight during poison unlearning also change significantly during clean unlearning. Therefore, we define the neuron weight change to identify them, and then remove them from the backdoored model. Regarding how unlearning affects the neurons, we offer insights from the perspective of neuron activations in **Section 3.3**. We would like to refer you to the **RW2 of Reviewer P1hB** for a brief summary. ___ **Q3: Explanation for the different model versions in ablation study.** **RQ3:** Thanks. Due to the space limit, we would like to refer you to the **RW2 of Reviewer kmL5** for the explanation of the differences among these three versions. ___ **Q4: Concern about the importance of reinitialization.** **RQ4:** We appreciate your concern about the importance of stage 1. While it exhibits stably good performance across different ratios in BadNets, **we cannot overlook its contribution to backdoor removal in some strong attacks**, e.g., Blended, LF, and SSBA, where the ASR remains larger than 20% after defense (see **Figure 8 of Appendix G**). For more details, we would like to refer you to the **R2 of Author Rebuttal**, where we **validate the importance of stage 1** by comparing TSBD with the version containing only stage 2. ___ [1] Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples. NeurIPS 2023. [2] Neural Polarizer: A Lightweight and Effective Backdoor Defense via Purifying Poisoned Features. NeurIPS 2023. 
[3] Penalizing Gradient Norm for Efficiently Improving Generalization in Deep Learning. ICML 2022. --- Rebuttal 2: Comment: Dear Reviewer Vmzu: We would like to express our sincere gratitude for your valuable insights and suggestions on our work. We have tried our best to address the concerns and queries you raised during the rebuttal process. However, we would greatly appreciate knowing **whether our responses have effectively resolved your concerns**. Your feedback will be instrumental in improving the quality of our work. Sincerely, Authors --- Rebuttal 3: Comment: Dear Reviewer Vmzu: Thanks again for your thoughtful comments. As the end of the discussion period is approaching, we would like to kindly ask whether any concerns about our paper remain unresolved. Your help is greatly appreciated, and we eagerly await your feedback before the end of the discussion period. Sincerely, Authors
Rebuttal 1: Rebuttal: ## General Response We sincerely thank all reviewers for their valuable time and constructive comments. ___ **Q1: Systematic comparison with RNP [1].** **R1:** We aim to address the concerns regarding the differences between our work and RNP. More precisely, we emphasize their differences from the perspectives of *technical details*, *motivation&insight*, and *experimental performance*, as follows: - **Different technical details.** - **The backdoor erasing techniques are different.** RNP utilizes *pruning* to erase the neurons permanently, while TSBD proposes *zero reinitialization* at the subweight level to modify the weights of neurons. By using the *zero reinitialization*, the modified neurons can be further repaired by the subsequent fine-tuning and better recover the clean functionality, which is infeasible in pruning. - **The goals are different for the second stage.** For RNP, as stated in [1], the purpose of *Filter Recovering* is to "*recover the clean features (features of the clean samples) erased by the previous unlearning step*", where a pruning mask is learned in this stage and backdoor erasing has not yet been conducted. However, for TSBD, our goal is to recover the clean functionality lost during erasing the backdoor effect by reinitializing the backdoor-related neurons, where fine-tuning is a commonly-used technique [2]. The loss of clean features comes from the *zero reinitialization* and has no direct relationship with unlearning. - **The target models are different in the second stage.** For RNP, the *Filter Recovering* is conducted based on the **unlearned model** from the previous stage, i.e., *Neuron Unlearning*. In contrast, for TSBD, the unlearning stage only helps find out the backdoor-related neurons, while the fine-tuning stage is based on the **reinitialized model** derived from the **original backdoored one**, not the unlearned model from the first stage. 
- **The subsequent operations are different after erasing the backdoor.** The two stages in RNP serve to find and prune the backdoor neurons, and no further operation is conducted. For TSBD, after removing the backdoor with *zero reinitialization* (as shown in Section 4.3, where the ASRs drop to 0), a further fine-tuning is conducted to recover the sacrificed ACC. Moreover, we adopt a novel gradient norm regularization to enhance this process. - **Different motivations and insights.** - **The motivations are different for conducting clean unlearning.** RNP aims to utilize the characteristics of a clean-unlearned model for mask learning and improving other techniques [1], i.e., exploring the utility of the unlearned model. In contrast, TSBD aims to explore the characteristics of the unlearning process, including those under different input data types and model types. The only reason for unlearning clean data is that it is an accessible data type for defense. - **The insights are from different perspectives.** For RNP, the authors state that "*the unlearned model tends to predict the backdoor label for all defense samples*" and it can be used to improve other defenses [1]. For TSBD, we emphasize the positive weight-change correlation of *clean* and *poison unlearning*, and we also uncover the neuron activeness of the backdoored model. These insights are complementary and lead to a better understanding of backdoor learning. We believe that RNP and TSBD can both contribute to the community. - **Different performances.** - **TSBD outperforms RNP in most cases.** For a fair comparison, we adopt the RNP method in the BackdoorBench framework and follow the same basic experimental setup as the other baselines; refer to **Appendix C** for more details.
For all the main experiments illustrated in **Section 4.2**, **Appendix D and E**, TSBD outperforms RNP with the best average performance, e.g., for CIFAR-10, 97.09 (TSBD) > 82.83 (RNP) on DER; for Tiny ImageNet, 97.89 (TSBD) > 87.59 (RNP) on DER, etc. - **TSBD is more robust to the clean data ratio than RNP.** Although RNP claims that only 1% clean data is needed for defense, it is not robust to the clean data ratio, which is validated in **Appendix D.9 of [3]**. In contrast, TSBD is validated to be robust to the clean data ratio (see **Appendix I**). In conclusion, **our TSBD is a newly designed method with several fundamental differences compared with RNP**. Our paper provides several important insights into the research field of backdoor learning, and the empirical results validate that TSBD is an effective defense method. ___ **Q2: Concern about the importance of stage 1.** **R2:** We aim to emphasize the importance of stage 1 for the final performance. As shown in **Figure 8 of Appendix G**, when the neuron ratio is reduced to 1%, TSBD fails on several strong attacks, with ASR larger than 20%, which indicates that performance would suffer without reinitialization. Here, we design a stage-2-only version, i.e., *Activeness-Aware Fine-Tuning* (**AaFT** for short), and compare it with the full version (TSBD) in **Table 1 (as follows)**. The results show that TSBD is more effective against most attacks, which **validates the importance of reinitialization for a successful defense**.
**Table 1: Comparison between **AaFT** (only stage 2) and **TSBD** (the full process)** |Method| AaFT |AaFT|TSBD|TSBD| |--|--|--|--|--| | Attack | ACC |ASR|ACC|ASR| | BadNets | 90.58 |**1.26**|90.72|1.31| | Blended | 91.70 |20.52|91.61|**2.61**| | Input-aware | 93.04 |**1.14**|93.06|1.94| | LF | 92.07 |6.07|91.20|**2.64**| | SIG | 90.01 |2.70|90.41|**1.27**| | SSBA | 91.61 |31.97|91.57|**1.66**| | Trojan | 91.71 |9.34|91.76|**5.06**| | WaNet | 93.04 |1.21|93.26|**0.88**| ___ [1] Reconstructive Neuron Pruning for Backdoor Defense. ICML 2023. [2] Fine-pruning: Defending against backdooring attacks on deep neural networks. RAID 2018. [3] Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples. NeurIPS 2023. Pdf: /pdf/ee7936e0f97893efe4ab52838f8b1f4a1d0be495.pdf
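The *zero reinitialization* idea discussed in the rebuttal above can be sketched in a few lines of numpy. This is a simplification for illustration only: the magnitude-based subweight selection below is a stand-in, since TSBD actually selects neurons and subweights based on unlearning weight changes and neuron activeness.

```python
import numpy as np

def zero_reinit_subweights(W, neuron_idx, frac=0.5):
    """Zero-reinitialize a fraction `frac` of the subweights in the selected
    neurons (rows of W). Unlike pruning, the zeroed entries remain trainable,
    so subsequent fine-tuning can recover clean functionality. Illustrative
    only: the magnitude criterion here is NOT the paper's selection rule."""
    W = W.copy()
    for i in neuron_idx:
        row = W[i]
        k = int(frac * row.size)
        top = np.argsort(np.abs(row))[-k:]   # largest-magnitude subweights
        row[top] = 0.0                       # reinitialize to zero, keep trainable
    return W
```

A subsequent fine-tuning step would then update the zeroed entries along with the rest of the weights, which is the key difference from permanent pruning.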
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Robust group and simultaneous inferences for high-dimensional single index model
Accept (poster)
Summary: This paper studies the high-dimensional single index model (SIM), which takes the form $Y=g(X^T\beta, \epsilon)$ with $\epsilon$ and $X$ being orthogonal. Although this model has flexibility and interpretability, its efficiency is adversely affected by outlying observations and heavy-tailed distributions. The paper improves this in the following 3 aspects: (1) they extend the rank-LASSO procedure to include both convex and non-convex penalties and establish error bounds for any local optimum of the empirical objective; (2) they provide asymptotically honest group inference procedures based on the idea of orthogonalization for testing the joint effect of many predictors; (3) they develop a multiple testing procedure for determining if the individual coefficients are relevant simultaneously, and show that it is able to control the FDR asymptotically. Strengths: Provides new ideas that deal with the inefficiency of the single index model due to outlying observations and heavy-tailed distributions. Weaknesses: I think the paper is quite well-written. The only issue may be that the authors could provide more background on the problem. Technical Quality: 3 Clarity: 3 Questions for Authors: N/A Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks very much for your valuable comments. ## Weaknesses The group inference is helpful to decide whether a group of predictors is important or not for the response. If we find that a group of predictors is important, we would like to know which specific predictors in the group are significant. For this aim, our developed multiple testing procedure is useful. For instance, researchers may aim to test whether a gene pathway, consisting of high-dimensional genes for the same biological function, is important for a certain clinical outcome, given the other high-dimensional genes. When determining that a certain gene pathway is important, researchers need to further identify specific genes within the pathway which are important for a certain clinical outcome. According to your helpful suggestion, we will add a real data analysis in the appendix of the revision. For your convenience, we also copy the real data analysis in the following: We apply our methods on a dataset about the riboflavin (vitamin B2) production rate with Bacillus Subtilis. This dataset was made publicly available by Bühlmann et al. (2014) and has been analyzed by many authors, for instance Meinshausen et al. (2009), Van de Geer et al. (2014), Javanmard and Montanari (2014), and Fei et al. (2019). The riboflavin dataset can be obtained from the R package $\texttt{hdi}$. It consists of $n = 71$ observations of strains of Bacillus Subtilis and $p = 4088$ covariates, measuring the log-expression levels of 4088 genes. The response variable is the logarithm of the riboflavin production rate. Our goal is to detect which genes are associated with the riboflavin production rate. Like most existing studies, we first reduce the ultrahigh dimension to a moderately high dimension. Here we pick out the first 300 genes by distance-correlation-based screening (Li et al., 2012). We first conduct global testing on these 300 genes.
The $p$-value of our group inference procedure is 1.29e-04, indicating that the null hypothesis is rejected and the selected 300 genes are influential for the riboflavin production rate. Next, we further use the FDR control procedure to select the important genes among these 300 genes. By implementing our proposed FDR control procedure with the FDR level of 0.1, we identify 10 genes that are significantly associated with the response. That is $G_{I}$ = {YTGB\_at, YCKE\_at, YXLE\_at, YXLD\_at, YJCJ\_at, XHLA\_at, xepA\_at, YCGO\_at, RPLP\_at, XKDS\_at}. If the FDR level is set as 0.2, 5 more genes will be selected. That is $G_{II}$ = $G_I$ $\cup$ {SPOIISA\_at, YHCB\_at, XKDI\_at, YJCF\_at, XHLB\_at}. We further conduct group inference on the selected subsets $G_{I}, G_{II}$ and their complement sets $G_I^c, G_{II}^c$. As expected, our group inference procedure finds again that $G_{I}, G_{II}$ are significant while $G_{I}^c, G_{II}^c$ are not. The corresponding $p$-values of $G_{I}$, $G_{I}^c$, $G_{II}$ and $G_{II}^c$ are 6.75e-06, 7.26e-01, 9.33e-06 and 9.37e-01, respectively. These results suggest that the genes selected by the FDR control procedure are indeed influential. We compare these selected genes with those selected by other methods. For example, the multi-sample-splitting method proposed in Meinshausen et al. (2009) identified YXLD\_at; Van de Geer et al. (2014) did not select any gene using the de-sparsified Lasso; Javanmard and Montanari (2014) only selected two genes: YXLD\_at and YXLE\_at; and Fei et al. (2019) claimed YCKE\_at, XHLA\_at, YXLD\_at, YDAR\_at and YCGN\_at as significant. From $G_I$ and $G_{II}$, we can see clearly that the gene YXLD\_at is detected not only by Meinshausen et al. (2009), Javanmard and Montanari (2014), and Fei et al. (2019), but also by our procedure. Besides, the genes YCKE\_at, YXLE\_at and XHLA\_at, which are detected by Javanmard and Montanari (2014) and Fei et al. (2019), are also found by our method.
Further, our procedure detects some additional important genes. --- Rebuttal Comment 1.1: Title: keep the score Comment: Thanks for the rebuttal. I tend to keep the score.
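The FDR-controlled gene selection described in the rebuttal above can be sketched generically. The Benjamini–Hochberg step-up rule below is a standard stand-in, not the paper's own multiple testing procedure, and the example p-values are hypothetical.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.1):
    """Benjamini-Hochberg step-up procedure at FDR level q.
    A generic stand-in, NOT the paper's multiple testing procedure."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # compare sorted p-values to the step-up thresholds q*k/m, k = 1..m
    passed = p[order] <= q * np.arange(1, m + 1) / m
    k = np.flatnonzero(passed).max() + 1 if passed.any() else 0
    return np.sort(order[:k])          # indices declared significant

# hypothetical p-values for four genes; at q = 0.1 the first two are selected
print(benjamini_hochberg([0.001, 0.002, 0.5, 0.8], q=0.1))   # [0 1]
```

As in the rebuttal, raising the FDR level (e.g. from 0.1 to 0.2) relaxes the thresholds and can only enlarge the selected set.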
Summary: This paper proposes a robust group hypothesis testing procedure for a high-dimensional single index model based on a data-driven transformation of the response variable. The key observation is that under the linearity condition (LC) of the predictors, a wide class of single index models can be equivalently (for testing purposes) converted into a linear model with a transformed response variable. The main contribution of this paper is the introduction of the distribution transformation (a data-driven approach) of the response variable, and the derivation of the asymptotic distribution of the resulting test statistic. Numerical performance of the proposed method is evaluated through simulation studies. Strengths: The proposed method is interesting and potentially useful. The paper is well-written and the presentation is clear. Weaknesses: The simulation study can be improved. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The introduction of the distribution function $F_n$ in (2.7) is interesting but also a little arbitrary. I suspect that any other functions that can be written as linear combinations of functions of $Y_i$'s could be used in (2.7), for example, the kernel density estimator or the kernel smoother of the $Y_i$'s. The resulting test statistic should still be asymptotically equivalent to a U-statistic. Could the authors comment on this? 2. Whenever the term "robustness" is involved, it implies loss of efficiency in some cases. In my opinion, it is important to make a transparent evaluation of the limitations of the proposed procedure. In the simulation study, one can add a comparison to the method with $h(Y)=Y$ as described in section 3.3. It will be informative if one can showcase in which cases which procedure is more powerful, and thus understand which method to use in practice. 3.
In the simulation study, I would also suggest a graphical comparison between different methods by gradually increasing $p_{out}$ from $0$ to, say, $0.5$ for a more complete picture. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The simulation study can be improved. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks very much for your valuable comments. ## Weaknesses We have added more simulations to illustrate our procedures and compare them with other methods. The simulation settings are summarized in the global response, and the simulated results are displayed in the attached pdf file. ## Question 1 As you noted, there are many possible choices for the transformation function $h(Y)$. There are several reasons for us to choose the distribution function as the transformation function: * Firstly, the response-distribution transformation function is bounded. Actually, by equation (2.3), given the widely imposed sub-Gaussian assumption on the predictors, any bounded transformation function $h(Y)$ would lead to the transformed error term $e$ being sub-Gaussian, even if the original error term $\epsilon$ in the single index model comes from a Cauchy distribution. * As noted by Rejchel and Bogdan (2020), in the empirical distribution function, the term $\sum_{j=1}^n I(Y_j \leq Y_i)$ is the rank of $Y_i$. Since rank-based statistics such as the Wilcoxon test and the Kruskal-Wallis ANOVA test are well known to be robust, this intuitively explains why our procedures with the distribution function are robust with respect to outliers in the response. * The distribution function is very easy to estimate, and thus our approach is straightforward to implement and understand. In contrast, the kernel density estimator and the kernel smoother of the $Y_i$'s additionally require tuning a bandwidth. ## Question 2 According to your valuable suggestion, we have conducted detailed comparisons with the method based on $h(Y) = Y$ as described in section 3.3. The simulation settings are summarized in the global response, and the corresponding numerical results are displayed in Figure R.1 in the attached pdf file. * In Figure R.1, we compare our method with the procedure based on $h(Y)=Y$ when the error term follows the standard normal distribution.
In the standard case (normally distributed error, linear model, and no polluted responses), the performance of the test statistic with $h(Y)=Y$ is slightly better than our method with $h(Y)=F(Y)$. However, in other settings, our method with $h(Y)=F(Y)$ performs much better than the method with $h(Y)=Y$. Although $h(Y)=Y$ performs well in the standard case, one cannot know the distribution of the error term or the form of the model in practice. Thus, compared to $h(Y)=Y$, we recommend using our procedure in practice. ## Question 3 We agree that a graphical comparison between different methods by gradually increasing $p_{out}$ from 0 to 0.5 would provide a more complete picture. According to your valuable suggestion, we have conducted detailed simulations for comparisons. The simulation settings are summarized in the global response, and the results are displayed in Figure R.1 of the attached pdf file. We consider three transformation procedures: (1) $h(Y) = F(Y)$ (Our method); (2) $h(Y) = Y$; (3) $h(Y) = \text{sigmoid}(Y) = 1/\\{1+\exp(-Y)\\}$. We vary $p_{out}$ over $\\{0, 0.1, 0.2, 0.3, 0.4, 0.5\\}$ and the simulation results are shown in Figure R.1. * Firstly, under the null hypothesis ($G_1$), $h(Y) = \text{sigmoid}(Y)$ cannot control the type I error for Model 2, while the other procedures perform well. * Secondly, under the alternative hypothesis ($G_2$), the empirical powers of the other procedures decrease rapidly as $p_{out}$ increases, while the powers of our procedure remain stable, which is particularly noticeable when $\delta = 0.5$. This finding indicates that our method has strong robustness when the responses are polluted. * Thirdly, our method performs well for both Model 1 and Model 2, indicating that our method is robust across different single-index models. ## Limitations Following your suggestions, we have added more simulations to illustrate our procedures and compare them with other methods.
The simulation settings are summarized in global response, and the simulated results are displayed in Figure R.1 of the attached pdf file. --- Rebuttal Comment 1.1: Title: response to rebuttal Comment: I want to thank you for addressing my concerns. They are very helpful. Congratulations for a solid work!
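The robustness comparison between $h(Y)=F(Y)$ and $h(Y)=Y$ discussed in the rebuttal above can be sketched in a few lines of numpy. This uses ordinary least squares in a low-dimensional setting rather than the paper's penalized estimator; the sizes, signal strength, and seed are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 10
beta = np.zeros(p); beta[:6] = 1.0          # 6 active signals
X = rng.standard_normal((n, p))
y = X @ beta + rng.standard_normal(n)       # standard normal error

# pollute 20% of responses: increase by 10 times the maximum original response
idx = rng.choice(n, size=n // 5, replace=False)
y_poll = y.copy()
y_poll[idx] += 10 * np.abs(y).max()

# h(Y) = F_n(Y): empirical distribution (rank) transform, centered
h = (np.argsort(np.argsort(y_poll)) + 1) / n - 0.5

def cos_to_beta(target):
    """Cosine similarity between the OLS direction for `target` and beta."""
    b, *_ = np.linalg.lstsq(X, target - target.mean(), rcond=None)
    return abs(b @ beta) / (np.linalg.norm(b) * np.linalg.norm(beta))

cos_rank, cos_raw = cos_to_beta(h), cos_to_beta(y_poll)
# the bounded rank transform recovers the direction of beta far better
# than the raw response once outliers are present
```

Because $F_n(Y)$ is bounded in $[0,1]$, the polluted observations can only be pushed to the top ranks, whereas with $h(Y)=Y$ they enter the least-squares fit at full magnitude.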
Summary: The paper proposes an algorithm for group inference in single-index models with an unknown link function, that is robust to heavy-tailed noise in the responses. The central idea of the approach is based on the property of elliptical distributions that the linear input-output correlations remain along a fixed direction for any transformation of the labels. The algorithm introduces a linear estimation objective for transformed labels resulting in an estimator converging to the true parameters under sparsity assumptions. Crucially, the obtained estimator yields a robust test for group inference satisfying the orthogonality property, which relaxes the requirements of separation between zero and non-zero coefficients. The obtained test further satisfies honesty, namely the uniform convergence of Type 1 error. The work concludes with numerical tests of the proposed approach. Strengths: The paper is generally well written and provides an adequate motivation for the problem of group inference under heavy-tailed response, and the idea of orthogonalization. The proposed approach is straightforward to implement and understand. The paper provides an extensive discussion of some important properties of the test such as honesty and power. The estimation error bounds are non-asymptotic and hold for general scalings of p,n. The numerical experiments support the efficiency and robustness of the approach. Weaknesses: A few limitations of the setup are not explicitly discussed: 1) The robustness is restricted to variability in the noise of the response function, not w.r.t general perturbations to the distribution. The paper also doesn’t provide quantitative bounds on robustness. 2) The condition kappa_h \neq 0 excludes even link functions and in particular the problem of sparse phase retrieval. 3) Theorem 3.1 imposes limitations on the sparsity level that should be explicit. 
The results in the paper seem to be restricted to constant sparsity levels. The choice of h=F for robustness appears to be due to F(Y) being uniformly distributed on [0,1] independent of the distribution of Y. This should be discussed more explicitly. Technical Quality: 3 Clarity: 3 Questions for Authors: Questions: Why are λX, λY not directly described in Theorem 3.1? Why is the transformation h=F particularly suitable for robustness? Is it because the distribution of F(Y) is agnostic to the distribution of Y? For sub-exponential variables, Lemma A.11 appears to be weaker than Bernstein’s inequality. Could you clarify the difference w.r.t Bernstein’s inequality and related inequalities for random variables with bounded Orlicz norms? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Some missing discussion on limitations is described above under “weaknesses”. The work is primarily of a theoretical nature and has no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks very much for your valuable comments. ## Weakness 1 As you noted, in our paper we say our procedure is robust because our methods do not need any moment condition for the error term in the single-index model. To address your concern, we further discuss the robustness of our procedure based on the tool of the efficient influence function. Recall that our test procedure is inspired by the quantity $$I = \Psi(P) = E_{P}[\\{F_{P}(Y) - 1/2 - Z_j^\top\gamma_j\\}(X_j - Z_j^\top\theta_j)],$$where $P$ is the distribution of $(X_j,Z_j^\top,Y)^\top$. Next we derive the efficient influence function (EIF) of $I$. Consider $$P_t=t\tilde P+(1-t)P,$$where $t\in[0,1]$, and $\tilde P$ is a point mass at a single observation $\tilde o = (\tilde x_j,\tilde z_j^\top,\tilde y)^\top$. After some calculation, the EIF for $I$ at observation $\tilde o$ is $$\phi(\tilde o,P)=\frac{d\Psi(P_t)}{dt}\vert_{t=0}=\\{F_P(\tilde y)-1/2-\tilde z_j^\top\gamma_j\\}(\tilde x_j-\tilde z_j^\top\theta_j)+E_P[\\{I(Y\geq\tilde y)-F_P(Y)\\}(X_j-Z_j^\top\theta_j)]-\Psi(P). $$ Since $I(Y\geq\tilde y)$ and $F_P(\tilde y)$ are bounded, then given $(x_j,z_j^\top)^\top$, $\phi(\tilde o,P)$ is bounded for any $\tilde y\in\mathbb{R}$. In terms of the EIF, our test statistics are robust with respect to the perturbations in the responses. ## Weakness 2 We agree with you. The condition $\kappa_h\neq0$ does exclude even link functions and in particular the problem of sparse phase retrieval. Neykov et al. (2020) introduced a novel procedure to deal with the problem of sparse phase retrieval when $\kappa_h=0$. We will incorporate their insights in the revision. ## Weakness 3 For consistency in $L_2$-loss, we require the sparsity level to satisfy $s_Y=o(n/\log p)$. For $L_1$-loss, the restriction becomes $s_Y=o(\sqrt{n/\log p})$. The sparsity level is allowed to diverge with $n$ and $p$.
## Weakness 4 By equation (2.3), given the widely imposed sub-Gaussian assumption on the predictors, any bounded transformation function $h(Y)$ would lead to the transformed error term $e$ being sub-Gaussian, even if the original error term $\epsilon$ comes from a Cauchy distribution. However, the response-distribution transformation is preferred for the following additional reasons. * As noted by Rejchel and Bogdan (2020), in the empirical distribution function, the term $\sum_{j=1}^n I(Y_j \leq Y_i)$ is the rank of $Y_i$. Since rank-based statistics such as the Wilcoxon test and the Kruskal-Wallis ANOVA test are well known to be robust, this intuitively explains why our procedures with the distribution function are robust with respect to outliers in the response. * The distribution function is very easy to estimate, and thus our approach is straightforward to implement and understand. * Lastly, besides being robust, the choice of $h(Y) = F(Y)$ also has relatively high efficiency. Specifically, we conduct detailed simulations to illustrate this point. The simulation settings are summarized in the global response, and the simulated results are displayed in the attached pdf file. Besides the distribution function, we also consider $h(Y) = Y$ and $h(Y) = \text{sigmoid}(Y) = 1/\\{1+\exp(-Y)\\}$. Compared to the other functions, our procedure has high power performance under nearly all the settings. The better performance compared to $\text{sigmoid}(Y)$ demonstrates that the superiority of our procedure is not merely due to the boundedness of the transformation function. ## Question 1 To simplify the presentation, all main assumptions are given in Section A.5 of the Appendix. In Assumption A.1 (ii), we assume that $c\sqrt{\log p/n}\leq \lambda_{X},\lambda_{Y}\leq C\sqrt{\log p/n}$ for some constants $0<c\leq C$. ## Question 2 Please refer to the response to Weakness 4 for details.
## Question 3 After some careful calculation, we agree with you that for sub-Exponential variables, Lemma A.11 is weaker than Bernstein’s inequality. Let $X_1,\ldots,X_n$ be independent, mean zero, sub-Exponential random variables. Then for $t\geq 0$, Bernstein's inequality implies that $$P\\{\vert\frac{1}{n}\sum_{i=1}^{n}X_{i}\vert\geq t\\}\leq 2\exp\\{-L\min(\frac{t^{2}}{K^{2}},\frac{t}{K})n\\},$$where $L$ is a constant and $K = \max_{1\leq i\leq n}\lVert X_{i}\rVert_{\psi_1}$. Lemma A.11 implies that $$P\\{\vert\frac{1}{n}\sum_{i=1}^{n}X_i\vert\geq t\\}\leq 4\exp\\{-\frac{1}{8}n^{1/3}t^{2/3}\\} + 4n L_{1}\exp\\{-\frac{L_{2}}{2}n^{1/3}t^{2/3}\\}$$for $t\geq \sqrt{8E(X_i^2)/n}$, where $L_{1}$ and $L_{2}$ are constants. For ease of comparison, suppose that $t \asymp n^{-c}$ for $c\in(0,1/2]$. Note that $$2\exp\\{-\frac{Lt^{2}n}{K^{2}}\\} \ll 4n L_{1}\exp\\{-\frac{L_{2}}{2}n^{1/3}t^{2/3}\\}$$when $n$ is sufficiently large. That is, the bound of Bernstein's inequality is sharper than the bound of Lemma A.11.
For $j\in\mathcal{G}$, $\sigma_j^2-\tilde\sigma_j^2$ can be represented as $\sigma_j^2-\tilde\sigma_j^2=\frac{1}{n}\sum_{i=1}^{n}Z_i$, where $Z_1,\ldots,Z_n$ are zero mean, i.i.d. variables with bounded sub-Weibull norm for $r=1/2$.
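The bound comparison discussed above can be checked numerically. The constants $L$, $K$, $L_1$, $L_2$ below are set to 1 purely for illustration, since their actual values are unspecified in the discussion:

```python
import numpy as np

L = K = L1 = L2 = 1.0   # illustrative constants; actual values are unspecified

def bernstein_bound(t, n):
    """Bernstein tail bound for the average of n sub-Exponential variables."""
    return 2 * np.exp(-L * min(t**2 / K**2, t / K) * n)

def lemma_a11_bound(t, n):
    """The Lemma A.11-style tail bound quoted above (sub-Weibull, r = 1)."""
    e = n ** (1 / 3) * t ** (2 / 3)
    return 4 * np.exp(-e / 8) + 4 * n * L1 * np.exp(-L2 * e / 2)

n = 10**6
t = n ** (-1 / 8)        # t ~ n^{-c} with c = 1/8, inside (0, 1/2]
# Bernstein's bound is vastly sharper at this scale, consistent with the
# conclusion above that Lemma A.11 is weaker in the sub-Exponential case
```

Lemma A.11 is nonetheless the tool needed in the proof of Lemma A.14, where the variables are sub-Weibull with $r=1/2$ and Bernstein's inequality no longer applies.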
null
null
Rebuttal 1: Rebuttal: Dear Program Chairs, Senior Area Chairs, Area Chairs and Reviewers, We thank all of you for your insightful comments and valuable suggestions, which have significantly enhanced the quality of this work. In this response, all the comments have been carefully addressed and accommodated. A summary of our response is as follows. * Discussions of robustness with respect to perturbations in the responses. In the revision, we carefully discuss the robustness of our inference procedure based on the tool of the efficient influence function. Please see the response to weakness 1 of Reviewer wCvo for details. * Discussions of the response-distribution transformation function. In the revision, we discuss the reasons why the response-distribution transformation is preferred. Please see responses to weakness 4 of Reviewer wCvo and question 1 of Reviewer BqNs for details. * Relationships between Lemma A.11 and Bernstein's inequality. Please see responses to questions 3 and 4 of Reviewer wCvo for details. * Additional simulation studies. In the revision, we have conducted additional simulation studies to illustrate the robustness of our procedure. Please refer to the attached pdf file for details. * **The simulation settings are summarized as follows.** We consider the following two models: * Model 1: Linear model: $Y = X^\top\beta + \epsilon$. * Model 2: Non-linear model: $Y = \exp(X^\top\beta + \epsilon)$. The regression coefficients $\beta = (\beta_{1},\beta_{2},\ldots,\beta_{p})^\top$ are generated as $\beta_{j} = \delta$ for $j = 1,\ldots,6$ and $\beta_{j}=0$ otherwise, where $\delta$ can be regarded as a signal strength parameter. We generate the error term from the standard normal distribution. We add outliers to pollute the observations: $p_{out}$ of the responses are picked at random and increased by $m_{out}$ times the maximum of the original responses. Specifically, the detailed settings for the above parameters are as follows.
Firstly, we consider the sample size $n=500$ and the dimension $p=800$. Secondly, we set the signal strength parameter $\delta$ to vary over $\\{0.1,0.3,0.5\\}$. Thirdly, we fix $m_{out}$ to 10. Lastly, we vary $p_{out}$ from $0$ to $0.5$ in increments of $0.1$. For more complete comparisons, we consider three transformation procedures: (1) $h(Y) = F(Y)$ (Our method); (2) $h(Y) = Y$; (3) $h(Y)=\text{sigmoid}(Y)=1/\\{1+\exp(-Y)\\}$. Other simulation settings are the same as described in section 4 of the main text. The simulation results are summarized in Figure R.1 of the attached pdf file. * Figure R.1 summarizes the results of empirical type I error and empirical power at the significance level $\alpha=0.05$ for different methods when the error term follows the standard normal distribution. Firstly, under the null hypothesis ($G_1$), $h(Y) = \text{sigmoid}(Y)$ cannot control the type I error for Model 2, while the other procedures perform well. Secondly, under the alternative hypothesis ($G_2$), the empirical powers of the other procedures decrease rapidly as $p_{out}$ increases, while the powers of our procedure remain stable, which is particularly noticeable when $\delta = 0.5$. This finding indicates that our method has strong robustness when the responses are polluted. Thirdly, our method performs well for both Model 1 and Model 2, indicating that our method is robust across different single-index models. * Discussion about the background of the problem. In the response, we discuss the practical background of the problem and demonstrate the practical value by a real data analysis. Please see the response to weaknesses of Reviewer Tg8m for details. * All the other comments from the Program Chairs, Senior Area Chairs, Area Chairs and Reviewers are also carefully addressed. Thank you very much for giving us an opportunity to revise the paper. Pdf: /pdf/7cc704c97fe3697b445bbe8aa82a198caf189ae3.pdf
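The data-generating settings described in the global response above can be sketched as follows (one replication in numpy; the inference procedures themselves are not reproduced here):

```python
import numpy as np

def simulate(model=1, n=500, p=800, delta=0.5, p_out=0.2, m_out=10, seed=0):
    """One simulated dataset: Model 1 (linear) or Model 2 (exponential link),
    with a fraction p_out of responses increased by m_out times the maximum
    of the original responses."""
    rng = np.random.default_rng(seed)
    beta = np.zeros(p)
    beta[:6] = delta                              # 6 nonzero coefficients
    X = rng.standard_normal((n, p))
    index = X @ beta + rng.standard_normal(n)     # standard normal error
    y = index if model == 1 else np.exp(index)
    out = rng.choice(n, size=int(p_out * n), replace=False)
    y = y.copy()
    y[out] += m_out * np.abs(y).max()             # pollute the responses
    return X, y, beta
```

A full replication study would then loop `delta` over {0.1, 0.3, 0.5} and `p_out` over {0, 0.1, ..., 0.5}, applying each transformation procedure to the generated data.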
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
A Modular Conditional Diffusion Framework for Image Reconstruction
Accept (poster)
Summary: This paper proposes a modular conditional diffusion framework for image reconstruction. Specifically, a small module is trained and combined with pretrained IR networks and DPMs. Experiments show that this method is effective. Strengths: 1. According to the quantitative and qualitative results, the method shows strong performance on several tasks. 2. The method can be applied to tasks combining task-specific pre-trained models, making the method easy to extend. 3. The module requiring additional training is small and the method can be trained efficiently. Weaknesses: - The figures in this paper are of poor quality. The authors should revise the figures to make the formulas clear and professional. - Overall I think the modular framework is of limited insight. The design is clean but not so informative for image restoration and some other related domains. - If I understand correctly, this method is agnostic to the specific image restoration task; though the authors present several tasks in this paper, it would be good if more qualitative results on various tasks were given. - I think the arguments in lines 45-49 are not proper. As the authors discuss the conditional space, sometimes we may not need too much time for the DPM training, especially when we have the base model and can perform efficient fine-tuning. Also, the case that is discussed (1M images and 120 GPU-days for LDM) does not precisely align with the conditional context. These values correspond to the original LDM, but today we usually use SD, which is trained on larger datasets and takes longer to train. Here, if we want to say a single condition-space-tailored DPM may require expensive resources, maybe it is better to include cases on finetuning SD or another domain-specific DPM. - For some baseline methods, it is unclear how they are implemented and compared.
For example, LDM is selected as the baseline, which gives qualitative and quantitative results, but I am not sure how it is used and which checkpoint is selected. The authors should clearly indicate how baselines are implemented. Technical Quality: 3 Clarity: 2 Questions for Authors: Considering the overall contribution and the weaknesses, I choose "Borderline reject". I am willing to adjust my score for my potential misunderstanding. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q18** The figures in this paper are of poor quality. The authors should revise the figures to make the formulas clear and professional. In our submitted manuscript we made sure that both Figures 1 and 2 are vector images and that the formulas inside these figures follow the same LaTeX style as we use inside the text. Following the reviewer's comment, we have checked their visual appearance in different PDF viewers and found none of the mentioned issues. We include a PDF file with raster screenshots of both figures as an attachment to our **general response** and kindly ask the reviewer to compare their visual appearance with those in our manuscript. For the camera-ready version we plan to expand the horizontal dimension of Figure 1 to match the full page width, which we hope will enhance its appearance. > **Q19** Overall I think the modular framework is of limited insight. The design is clean but not so informative for image restoration and some other related domains. We respectfully disagree with this particular assessment of the reviewer. Previous DPM-based works [40, 60, 78] have attempted to directly estimate the conditional expectation $\mathbb{E} \left[\bf{x_0} | \bf{y}, \bf{x_t}\right]$ (or residual noise) and have achieved outstanding performance while sacrificing the generalizability to other tasks. In contrast, we demonstrate that $\mathbb{E} \left[\bf{x_0} | \bf{y}, \bf{x_t}\right]$ in Eq. (5) can be expressed as a fusion of two separate expectations: $\mathbb{E} \left[\bf{x_0} | \bf{y}\right]$ and $\mathbb{E} \left[\bf{x_0} | \bf{x_t}\right]$. This approach allows us to learn a single unconditional generative denoising function to estimate $\mathbb{E} \left[\bf{x_0} | \bf{x_t}\right]$ and apply it to different image reconstruction problems with minimal computational overhead, by training our fusion module on a small dataset.
Further, we have verified the validity of our approach on three different and challenging blind image reconstruction tasks by performing comparisons with competitive sota methods on several public benchmarks. > **Q20** If I understand correctly, this method is agnostic to the specific tasks of image restoration, though the authors present several tasks in this paper, it would be good if more qualitative results on various tasks are given. Our method is not fully agnostic to the image restoration task at hand, since in order to apply it to a new unseen task the training of a small Fusion network (0.7M params) is required. Due to page limitations, and in order to reserve enough space to adequately explain the key ideas of our work, similar to the common practice followed by several other existing works ([78],[59],[60],[40],[21]), we have decided to conduct experiments and perform comparisons on three challenging tasks from different modalities, while we plan to consider additional reconstruction tasks as future work. > **Q21** I think the arguments in line 45-49 are not proper. As the authors discuss the conditional space, sometimes we may not need too much time for the DPM training especially we have the base model and can perform efficient fine-tuning. Also, the case that is discussed (1M images and 120GPU-days for LDM) does not precisely align with the conditional context. These values correspond to the original LDM but we today usually use SD which is trained on huger datasets and takes more time training. Here if we want to say a single condition space tailored DPM may require expensive resources, maybe it is better to include cases on finetuning SD or other domain-specific DPM. This is a valid point raised by the reviewer, and in the revised version we plan to include a discussion about existing methods that fine-tune DPMs.
One such example is the T2I-Adapter [A], which requires 12 GPU-days on an NVIDIA V100 32GB GPU and utilizes between 164K and 500K images, depending on the task. Another relevant example is ControlNet [B], which demands between 4 and 25 GPU-days on an NVIDIA A100 80GB GPU (approximately three times faster than the NVIDIA V100) and employs training datasets ranging from 25K to 3M images. Furthermore, the aforementioned adapters increase the number of trainable parameters by at least 77M and up to half the parameter size of Stable Diffusion. [A] Mou, Chong, et al. "T2I-Adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 5. 2024. [B] Zhang, Lvmin, Anyi Rao, and Maneesh Agrawala. "Adding conditional control to text-to-image diffusion models." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. > **Q22** For some baseline methods, it is unclear how they are implemented and compared. For example, LDM is selected as the baseline which gives qualitative and quantitative, but I am not sure how it is used and which checkpoint is selected. The authors can clearly indicate how baselines are implemented. The LDM checkpoint that we used for comparison is the original model trained for the SISR task: https://ommer-lab.com/files/latent-diffusion/sr_bsr.zip. Following the reviewer's suggestion, we have included an additional section in the Appendix with the description and links to the code for all baselines that we report in the manuscript. --- Rebuttal Comment 1.1: Comment: Thank you for your response. The rebuttal has addressed my concerns, and I have accordingly raised my rating to 5. Regarding Q18, my concern was that the figure appears to be of poor aesthetic quality and less professional. It would be beneficial for the framework figure to be clearer and more informative.
--- Rebuttal 2: Title: Thank you for acknowledging our rebuttal and increasing your score Comment: We are very pleased that our response addressed all the reviewer's comments, and we thank the reviewer for raising the score. Regarding Q18, we now understand the reviewer's concern, which we also find similar to the one raised by reviewer hKXQ. Following the reviewer's suggestion, apart from changing its color scheme, we will also re-design the figure content to enhance its overall informativeness. If all other concerns are addressed, we would be grateful if the reviewer would consider increasing the confidence score.
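The modular decomposition defended in Q19 above (approximating $\mathbb{E}[\bf{x_0}|\bf{y},\bf{x_t}]$ by fusing $\mathbb{E}[\bf{x_0}|\bf{y}]$ and $\mathbb{E}[\bf{x_0}|\bf{x_t}]$) can be illustrated with a minimal schematic sketch; all function names here are hypothetical stand-ins, not the authors' actual implementation:

```python
def dp_ir_step(y, x_t, t, ir_net, denoiser, fusion):
    """Schematic of the modular DP-IR idea: approximate E[x0 | y, x_t]
    by fusing two separately estimated expectations."""
    e_x0_given_y = ir_net(y)           # task-specific pre-trained IR network: E[x0 | y]
    e_x0_given_xt = denoiser(x_t, t)   # unconditional generative denoiser: E[x0 | x_t]
    return fusion(e_x0_given_y, e_x0_given_xt)  # small trainable fusion module
```

Only `fusion` needs task-specific training; `ir_net` and `denoiser` are reused as-is, which is the source of the generalizability claimed in the rebuttal.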
Summary: This paper proposes a new approach to improving the efficiency and applicability of Diffusion Probabilistic Models (DPMs) for various image restoration (IR) tasks. A new modular diffusion probabilistic image restoration (DP-IR) framework combines pre-trained state-of-the-art IR networks with generative DPMs. This framework requires only a small additional module (0.7M parameters) to be trained for specific IR tasks, making it more practical and less computationally expensive. This framework is evaluated on four benchmarks, covering tasks such as burst JDD-SR, dynamic scene deblurring, and super-resolution. Strengths: Originality: Modular framework and accelerated sampling strategy. Quality: Robust methodology and comprehensive experimental validation. Clarity: Clear presentation. Significance: By reducing the computational requirements and enhancing the generalizability of IR methods, the framework could facilitate the adoption of advanced image restoration techniques in real-world scenarios where computational resources are limited. Weaknesses: Computational cost analysis: A more detailed analysis of the computational cost, including memory usage and inference time, would be helpful. Experimental validation: While the paper demonstrates strong performance on specific benchmarks, the experiments could benefit from additional datasets to further validate the generalizability of the proposed framework. Technical Quality: 2 Clarity: 3 Questions for Authors: How sensitive is your framework to the choice of hyperparameters? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Elaborate on future work directions that could address these limitations. For instance, proposing specific research avenues to enhance the generalizability of the framework, or developing more adaptive hyperparameter tuning methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q14** Computational cost analysis: A more detailed analysis of the computational cost, including memory usage and inference time, would be helpful. While we agree with the reviewer that the inclusion of such information has practical value, we would also like to highlight existing problems that prevent us from doing so: 1. Both memory usage and inference time are highly dependent on the efficiency of the implementation of the particular method and the hardware equipment used. Moreover, depending on the implementation, memory usage can be sacrificed for faster inference and vice-versa. 2. All our diffusion-based competitors in the dynamic scene deblurring problem and one competitor in SISR (InDI) do not offer publicly available implementations, making such direct comparisons impossible. To avoid any speculation on these subjects, we have refrained from performing such comparisons. For our computational cost analysis we rely solely on the Neural Function Evaluations (NFEs) to explicitly compare the inference compute, and the number of parameters to implicitly compare the training data and training compute. For the NFEs comparison we assume that all the competing diffusion-based methods have similar computational complexity in terms of FLOPs per single NFE. Under this assumption, to perform a comparison for the inference costs it is sufficient to just compare the number of NFEs. Nevertheless, we agree with the reviewer that this is not the best approach for a computational cost analysis, and for this reason we additionally report below the exact FLOPs per 720p input for competing methods in the dynamic scene deblurring problem.
| Method | TFLOPs (equation) | TFLOPs Total $\downarrow$ |
|--------|-----------------------|---------------------------|
| DvSR | 1.2$\times$N + 4.8 | 604.8 |
| icDPM | 4.8$\times$N + 5.2 | 2405.2 |
| InDI | 4.8$\times$N | 48.0 |
| Ours | 4.3$\times$N + 1.9 | **23.4** |

The proper way to interpret these equations is as follows: $\textrm{TFLOPs Total} = x \times N + y$, where $x$ is the TFLOPs complexity for a single backbone pass within the diffusion process, $N$ is the total number of neural function evaluations (NFEs) per sampling process, and $y$ is the complexity of sub-modules that have to be run once per image (e.g. Image Restoration network in our method, pre-processing net for icDPM and DvSR). Based on these results, we observe that the computational cost of our method is significantly lower compared to our diffusion-based competitors. We have included this information regarding the FLOPs in the revised version of the manuscript and hope that it enhances the computational cost analysis. > **Q15** Experimental validation: While the paper demonstrates strong performance on specific benchmarks, the experiments could benefit from additional datasets to further validate the generalizability of the proposed framework. We validated our method using widely recognized and publicly available datasets to ensure a fair and reproducible assessment of our approach. Utilizing these commonly used datasets allows for more straightforward benchmarking and comparison with other methods. However, for other less commonly used or proprietary datasets, it becomes difficult to make direct comparisons. This is primarily due to the fact that many state-of-the-art (SOTA) approaches do not release their implementations publicly. Consequently, without access to these implementations, reproducing their results or performing a head-to-head comparison on different datasets poses significant challenges.
Therefore, our evaluation is limited to datasets for which there is widely available and accessible benchmark data. > **Q16** How sensitive is your framework to the choice of hyperparameters? The main hyperparameter of our framework is the time threshold $\tau$, and we refer the reviewer to Table 7 and Table 10 of our manuscript for the detailed ablation study of its influence on the reconstruction quality. All the other parameters of our framework, specifically the total number of diffusion steps $T$ and diffusion coefficients $\\{\beta_t\\}_{t=1}^T$, were taken as-is from the prior literature on diffusion models. We should note that no other works on image reconstruction via reverse diffusion perform a sensitivity study w.r.t. the values of this sequence; we likewise refrained from doing so and leave this topic as a possible future research direction. However, we have investigated the sensitivity of our approach to the choice of the specific modules, and we refer the reviewer to Table 5 in the main text and section F in the Appendix for details of this study. > **Q17** Elaborate on future work directions that could address these limitations. For instance, proposing specific research avenues to enhance the generalizability of the framework, or developing more adaptive hyperparameter tuning methods. Please find our comment in section "Limitations and further perspectives" of our general response.
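The per-method cost model quoted in the Q14 answer above, $\textrm{TFLOPs Total} = x \times N + y$, is easy to sanity-check with a short script. In this sketch the NFE counts are our own inference, obtained by solving each row's equation against its reported total; they are assumptions, not values taken from the manuscript:

```python
def total_tflops(x, n, y=0.0):
    """Total inference cost: x TFLOPs per NFE, N NFEs per sampling process,
    plus y TFLOPs for sub-modules that run once per image."""
    return x * n + y

# NFE counts inferred from the reported totals
assert abs(total_tflops(1.2, 500, 4.8) - 604.8) < 1e-6   # DvSR
assert abs(total_tflops(4.8, 500, 5.2) - 2405.2) < 1e-6  # icDPM
assert abs(total_tflops(4.8, 10) - 48.0) < 1e-6          # InDI
assert abs(total_tflops(4.3, 5, 1.9) - 23.4) < 1e-6      # Ours
```

The one-time term $y$ dominates only when $N$ is very small, which is exactly the regime the accelerated sampler targets.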
Summary: This manuscript proposes a modular conditional diffusion model for image reconstruction, consisting of three components: a pre-trained image restoration network, a denoising network, and a fusion network. Strengths: The model reduces computational load by minimizing the number of network modules that need to be trained and by utilizing lightweight networks. The fusion network is the only component that needs to be trained for each specific IR task. The model also achieves significant acceleration in the sampling process without any loss of reconstruction quality. Weaknesses: The analysis of trade-offs between model size, speed, and performance may be insufficient. There is insufficient explanation regarding the reasons for selecting specific modules. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. The standard and rationale for selecting the baseline models have not been detailed. For example, why were specific image restoration networks, denoising networks, and fusion networks chosen? Were these choices based on certain performance metrics, relevant literature, or experimental results? Providing this explanation will help readers understand the rationale behind the model design. 2. Were other acceleration techniques used during the sampling process? If so, please provide specific technical details and experimental results, comparing performance differences before and after applying these acceleration techniques. 3. It is suggested that the authors revisit the formula in line 115 to ensure that each part is clear and understandable. If there are any symbols or variables that are not clearly defined, please provide explicit definitions. Consider adding explanatory text before and after the formula to help readers understand its meaning and application. 4. It is advisable for the authors to verify if the PSNR values in Table 5 are consistent with those in other tables.
If all tables should indicate the PSNR Target values as ∞, ensure consistency across all tables to avoid confusion for readers during their review.  It is recommended that the authors redesign the color scheme of Figure 1 to enhance its aesthetic appeal and improve clarity for better understanding. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The authors do not point out the limitations of the work and do not offer further perspectives. I hope it will be improved. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q8** The standard and rationale for selecting the baseline models have not been detailed. For example, why were specific image restoration networks, denoising networks, and fusion networks chosen? Were these choices based on certain performance metrics, relevant literature, or experimental results? Providing this explanation will help readers understand the rationale behind the model design. Please refer to the section "Network architectures choice" in the general response. Thank you! > **Q9** Were other acceleration techniques used during the sampling process? If so, please provide specific technical details and experimental results, comparing performance differences before and after applying these acceleration techniques. As we mention in lines 244-246 and 270-273 of the manuscript, for the SISR task we further employ the DDIM acceleration technique during the sampling process. Following the reviewer's suggestion, we will include an additional section in the appendix with a short technical description of this technique and we will refer to it inside the main text of the manuscript. During our experiments, we didn't observe significant quantitative/qualitative differences after applying this acceleration technique. To be more specific, we provide below a comparison of the reconstruction performance with and without utilizing DDIM, which we will also include in the newly added section in the appendix.

| Method | PSNR $\uparrow$ | SSIM $\uparrow$ | LPIPS $\downarrow$ | TOPIQ$_{\Delta}$ $\downarrow$ | NFE $\downarrow$ |
|------------------|-----------------|-----------------|--------------------|--------------------------------|------------------|
| Ours (with DDIM) | 28.12 | 0.793 | 0.140 | $\textbf{0.002}$ | 51 |
| Ours w/o DDIM | $\textbf{28.16}$ | $\textbf{0.794}$ | $\textbf{0.139}$ | 0.007 | 251 |

> **Q10** It is suggested that the authors revisit the formula in line 115 to ensure that each part is clear and understandable.
If there are any symbols or variables that are not clearly defined, please provide explicit definitions. Consider adding explanatory text before and after the formula to help readers understand its meaning and application. We thank the reviewer for this suggestion. Indeed, in the original version of the manuscript we missed explaining the purpose of the $\beta\_{t}$ parameter. In the revised version we clarify that $\beta\_{t}$ is the noise scheduling parameter for the forward process. We will also re-write the formula to make it more compact and clear. > **Q11** It is advisable for the authors to verify if the PSNR values in Table 5 are consistent with those in other tables. If all tables should indicate the PSNR Target values as $\infty$, ensure consistency across all tables to avoid confusion for readers during their review. We thank the reviewer for raising this point. Indeed, in Table 5 there was a typo where we used the word "inf" instead of the "$\infty$" sign, which has now been corrected. > **Q12** It is recommended that the authors redesign the color scheme of Figure 1 to enhance its aesthetic appeal and improve clarity for better understanding. Following the reviewer's suggestion, we have redesigned Figure 1 and changed its horizontal dimension in order to match the text width. We believe that these changes will improve its aesthetic appeal. > **Q13** The authors do not point out the limitations of the work and do not offer further perspectives. I hope it will be improved. Please see section "Limitations and further perspectives" in our general response.
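As clarified in the Q10 answer above, $\beta_t$ is the noise-scheduling parameter of the forward diffusion process. A minimal sketch of one forward (noising) step under the standard DDPM convention may help make its role concrete; this is the generic textbook formula, not necessarily the paper's exact Eq. in line 115:

```python
import numpy as np

def ddpm_forward_step(x_prev, beta_t, rng):
    """One step of the DDPM forward process:
    q(x_t | x_{t-1}) = N(sqrt(1 - beta_t) * x_{t-1}, beta_t * I)."""
    noise = rng.standard_normal(np.shape(x_prev))
    return np.sqrt(1.0 - beta_t) * np.asarray(x_prev) + np.sqrt(beta_t) * noise
```

With `beta_t = 0` the step is the identity; as the schedule grows, the signal is progressively replaced by Gaussian noise.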
Summary: This paper proposes a modular diffusion probabilistic IR framework to combine the performance benefits of existing pre-trained state-of-the-art IR networks and generative DPMs with a light-weight fusion network. Experimental results on burst JDD-SR, dynamic scene deblurring, and super-resolution demonstrate its superior performance and better perceptual quality. Strengths: 1. The proposed framework is based on the existing IR network and equipped with a denoising module, which is suitable for various reconstruction problems without retraining and reduces the burden of computing resources and training data. 2. An accelerated sampling algorithm is proposed to further reduce the computational burden during inference. 3. Experimental results on burst JDD-SR, dynamic scene deblurring, and super-resolution demonstrate its superior performance and better perceptual quality, highlighting DP-IR’s versatility. Weaknesses: 1. The contribution of accelerated sampling seems trivial. As the author said, references [13],[52] already proposed a similar idea. 2. Although the author criticizes the task-specific nature of existing solutions, DP-IR still needs to train the fusion network for each specific task, which makes it less versatile. 3. Absence of necessary ablation experiments: a) The effectiveness of Eq.10’s sampling strategy; b) Inference without the image restoration network, i.e., input with y directly. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why utilize a smaller version of MIRNet as the denoiser, rather than other architectures used for diffusion? 2. Why not utilize an open-sourced diffusion model, like Stable Diffusion? 3. In Eq. 10, the author gets x_{\tau} through posterior sampling from x_{T} and E[x_{0}|y], why not directly noising E[x_{0}|y] to x_{\tau}? The idea that “Lemma 3.2 makes the final reconstruction quality unaffected” is confusing, because E[x_{0}|y] itself is not the true x_{0}. I think an ablation study is needed to verify your idea. 4. Although the ablation of different combinations of the image restoration network and denoising network is given, an ablation of only utilizing the image restoration network or only the denoising network is also needed. 5. Minor errors: a) “in section 3.4” in line 228 may be “in Table 1”; b) Should Table 1 be Table 2? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1** The contribution of accelerated sampling seems trivial. As the author said, references [13],[52] already proposed a similar idea. We agree with the reviewer that our acceleration strategy indeed bears certain similarities to the strategies presented in the referred papers. However, as we also highlight in lines 233-244 of the manuscript, our approach is more general than those proposed in [13],[52]. Below we discuss in more detail both the conceptual and technical differences, and show empirically that our approach leads to better reconstruction results. ### Conceptual difference Using our notation, both [13] and [52] propose to start the reverse process from a timestep $\tau$ and a noisy version $\textbf{x}_\tau$ of the initial estimate of $\textbf{x}_0$, which we denote by $\mathbb{E}[\textbf{x}_0|\textbf{y}]$. The main conceptual difference of our approach is that in these cases $\textbf{x}\_\tau$ is obtained using the forward diffusion process, while in our case we arrive at $\textbf{x}\_\tau$ using the reverse process. The initial motivation for our proposed approach is also different. In particular, while we motivate our procedure from a probabilistic viewpoint and propose to approximate the conditional score function as a composition of three functions, the authors in [13] base their strategy on the contraction property of reverse SDEs, while the authors in [52] use the re-projection of unrealistic images to the manifold of natural images in the noisy latent space. ### Technical difference Given that in our work we consider the standard DDPM realization of the diffusion process (VP-SDE), we will explain the existing differences under this scenario.
The authors of [13] and [52] propose to parameterize $\textbf{x}\_\tau$ as $\textbf{x}\_\tau=\sqrt{\bar{\alpha}\_\tau}\mathbb{E}[\textbf{x}\_{0}|\textbf{y}]+\sqrt{1-\bar{\alpha}\_\tau} \textbf{z}$, where $\textbf{z}\sim\mathcal{N}(\textbf{0},\textbf{I}).$ In contrast, in our case by using Eq.(10) we adopt the following parameterization: $\textbf{x}\_\tau=\sum_{i=0}^{T-\tau-1}\Gamma\_{T-i-1}^{\tau+1}\gamma\_{T-i}\mathbb{E}[\textbf{x}\_0|\textbf{y}]+\Gamma\_T^{\tau+1}\textbf{x}\_T+\sqrt{\sum_{i=0}^{T-\tau-1}(\Gamma\_{T-i-1}^{\tau+1})^2\sigma\_{T - i}^2}\textbf{z}$, where $\textbf{z},\textbf{x}\_T\sim\mathcal{N}(\textbf{0},\textbf{I})$. As we have already highlighted in lines 233-244 of the manuscript, our parameterization is more general and it is possible to show by induction that under certain conditions it leads to the exact same $\textbf{x}_\tau$ as in [13] and [52]. Finally, to experimentally demonstrate that our approach exhibits certain benefits compared to the ones described in [13] and [52], we conducted additional comparisons for the SISR problem between the different sampling strategies. From these results it is clear that our proposed strategy works better in practice and leads to superior results both in terms of fidelity and perceptual quality.

|Acceleration Strategy|PSNR $\uparrow$|SSIM $\uparrow$|LPIPS $\downarrow$|TOPIQ$_{\Delta}$ $\downarrow$|NFE $\downarrow$|
|-|-|-|-|-|-|
|Ours|$\textbf{28.12}$|$\textbf{0.793}$|$\textbf{0.140}$|$\textbf{0.002}$|51|
|[13],[52]|28.05|0.783|0.142|0.016|51|

> **Q2** Although the author criticizes the task-specific nature of existing solutions, DP-IR still needs to train the fusion network for each specific task, which makes it less versatile. Please see section "Versatility of proposed framework" in our general response. > **Q3** Absence of necessary ablation experiment: a) The effectiveness of Eq.10’s sampling strategy; b) Inference without image restoration network, i.e., input with y directly.
a) Regarding the effectiveness of our sampling strategy as described in Eq. (10), we refer the reviewer to our answer to the first comment **Q1**. b) It is important to note that the condition $\textbf{y}$ can represent the measurement signal from different imaging modalities. As a result, $\textbf{y}$ typically lies in a different domain than the one of the target signal $\textbf{x}\_{0}$. Therefore, the direct fusion of the output of the denoising module $\boldsymbol{\phi}_{\boldsymbol{\theta}_D}^D\left(\tilde{\boldsymbol{x}}_t, \tilde{\sigma}_t\right)$ and the condition $\textbf{y}$ is not always feasible, but requires the processing of $\textbf{y}$ to ensure that both signals lie in a common domain. An illustrative example of such a case is the burst JDD-SR (Joint Demosaicing, Denoising, and Super-Resolution) task, where the measurement signal $\textbf{y}$ consists of several image frames, with each one lying in the domain of raw (mosaicked) and low-resolution images. Had we chosen not to process $\textbf{y}$ with an image restoration network, then we would have had to deploy a fusion module which would be required to pre-process $\textbf{y}$ before performing the actual fusion. In this case, for every different inverse problem we would have to carefully design a specific architecture for the fusion model. Such a strategy would be less versatile than our current approach and would not take advantage of existing pre-trained sota restoration networks. > **Q4** Why utilize a smaller version of MIRNet as the denoiser, not other architecture used for diffusion? Why not utilize open-sourced diffusion model, like stable diffusion? Please see section "Network architectures choice" in our general response. > **Q5** In Eq. 10, the author gets $\textbf{x}\_{\tau}$ through posterior sampling from $\textbf{x}\_{T}$ and $E[\textbf{x}\_{0}|\textbf{y}]$, why not directly noising $E[\textbf{x}\_{0}|\textbf{y}]$ to $\textbf{x}\_{\tau}$?
We refer the reviewer to our answer to **Q1**, where we motivate our choice for doing so. **For the rest of the responses please see our official comment below** --- Rebuttal 2: Title: continuation: Q6-Q7 Comment: > **Q6** The idea that “Lemma 3.2 makes the final reconstruction quality unaffected” is confusing, because $E[\textbf{x}\_{0}|\textbf{y}]$ itself is not the true $\textbf{x}\_{0}$. I think an ablation study is needed to verify your idea. We thank the reviewer for raising a valid point. What we meant with our statement "which as a consequence of Lemma 3.2 does not compromise the final reconstruction quality" in lines 235-236 is that if we approximate $\textbf{x}\_{0}$ with $E[\textbf{x}\_{0}|\textbf{y}]$, then the reconstruction result is going to be the same whether we utilize the multi-step reverse diffusion process or the one-step process that we described in Lemma 3.2. Our statement **was not related** to the equivalence of the diffusion process between the utilization of the true signal $\textbf{x}\_{0}$ and $E[\textbf{x}\_{0}|\textbf{y}]$. Based on the reviewer's comment, we now understand that the current form of this statement can lead to confusion and we will reformulate it in the revised version of the manuscript. > **Q7** Although the ablation of different combination of image restoration network and denoising network is given, an ablation of only utilize image restoration network or only utilize denoising network is also needed. Regarding the utilization of only the image restoration networks, we report such comparisons and results in Tables 2-4. Specifically, for the task of JDD-SR in our framework we employed BSRT-small, for dynamic scene deblurring we employed FFTformer, while for SR we employed SwinIR. In Tables 2-4 we report the restoration performance of all these networks when used in a standalone way.
From these results we can observe that our framework leads to a noticeable improvement in terms of perceptual quality, which is the primary metric that DPM methods aim to improve. Regarding the utilization of only the denoising module, we are afraid that such comparisons will not lead to any meaningful conclusions, since any proper solution of the restoration tasks under study would have to be consistent with the measurements $\textbf{y}$. Given the generative capabilities of existing Denoising Probabilistic Models (DPM) modules, omitting the condition $\textbf{y}$ could lead to unpredictable behavior and result in an image that does not align with the original signal $\textbf{y}$. In the scenario that we instead consider a conditional DPM method, then we essentially end up with the methods that we are also comparing against in Tables 3-4 and which are specifically designed for one particular task. >$\textbf{Q}$ Minor error: a) “in section 3.4” in line 228 may be “in Table 1”; b) The Table 1 should be table 2? Indeed, we have fixed the wrong references in the revised version: "Table 1" -> "Figure 1" and "section 3.4" -> "figure 2" in line 228. Thank you for pointing out this issue.
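For reference, the forward-noising initialization of [13],[52] quoted in the Q1/Q5 discussion above reduces to a one-liner under the standard DDPM schedule. This is a sketch of the baseline parameterization only; the paper's own Eq. (10) parameterization is not reproduced here, since its $\Gamma$, $\gamma$, and $\sigma$ coefficients are defined in the manuscript:

```python
import numpy as np

def init_x_tau(e_x0_given_y, alpha_bar_tau, rng):
    """[13]/[52]-style start of the reverse process at timestep tau:
    x_tau = sqrt(abar_tau) * E[x0|y] + sqrt(1 - abar_tau) * z, z ~ N(0, I)."""
    z = rng.standard_normal(np.shape(e_x0_given_y))
    return np.sqrt(alpha_bar_tau) * np.asarray(e_x0_given_y) + np.sqrt(1.0 - alpha_bar_tau) * z
```

When `alpha_bar_tau` is 1 (tau = 0) the initialization is just the IR estimate itself; smaller values inject progressively more Gaussian noise before the truncated reverse process starts.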
Rebuttal 1: Rebuttal: # General response We would like to thank all reviewers for the insightful questions and valuable suggestions. In this response we would like to address common questions that were asked by more than one reviewer. ## Limitations and further perspectives In the manuscript (lines 356-363) we have briefly described one limitation of our framework related to the optimal choice of $\tau$, which is dependent on the particular reconstruction task at hand and needs to be experimentally selected. Another limitation, which we have added in the revised version of the manuscript, is that the performance of our method is bounded by the performance of the utilized backbone networks (Denoising and IR module). Therefore, for a novel image restoration task where a pre-trained IR network does not exist, our framework is not applicable. Finally, in the case of imaging modalities (e.g. medical imaging) for which a score-matching network (denoising module) has not been trained, it is important to either fine-tune an existing denoising module or completely re-train one using appropriate image data. ## Network architectures choice **Denoising and IRNet modules.** The reason for utilizing a smaller version of MIRNet as a Denoising module is that we wanted to approximately match the number of parameters and the computational complexity of the networks used in our framework with those of the alternative methods under study, in order to ensure a fair evaluation and comparison. This strategy has allowed us to achieve direct performance comparisons under similar conditions. Regarding the utilization of stable diffusion, reviewer WWsb makes a valid point, and according to our ablation studies (see Table 5) we observe that by utilizing more powerful modules we can expect improvements in the overall performance.
The main reason for not using Stable Diffusion is that, given the network size constraints that we had, as explained above, we experimentally found that the selected network architecture provided the best results. **Fusion Module.** We have experimented with several basic fusion architectures, but we did not conduct an exhaustive search for the best-performing architecture. Our proposed fusion module serves as a proof of concept for the validity of our overall proposed framework and the performance improvements that it can achieve. A more in-depth investigation of appropriate fusion architectures can serve as a very interesting future research direction. ## Versatility of proposed framework Our strategy cannot work out of the box for every inverse problem but requires tuning of a relatively small 0.7M-parameter fusion network for each particular problem. Nevertheless, we would like to emphasize that the main focus of our work is blind image restoration problems. To the best of our knowledge, all diffusion-based approaches that have been proposed in the literature to deal with such tasks require the training of far larger conditional backbone networks (~10-100M params). This turns out to be significantly more challenging both in terms of necessary training data and computational resources. To showcase this, we provide an indicative example below. If we adopt the existing diffusion-based SISR baselines and train them for a completely different restoration problem by following the original authors' training strategies, it turns out that the computational and data requirements are significantly higher than those of our method.
|Method|Params required|Data required|
|-|-|-|
|Ours|1x|1x|
|SRDiff|$\sim$34x|$\sim$4x|
|LDM|$\sim$240x|$\sim$1000x|
|InDI|$\sim$89x|$\sim$1x|
|IDM|$\sim$167x|$\sim$1x|

Based on these data, we can safely state that our strategy provides a reasonable trade-off between the required training complexity and the competitive performance of our method across a variety of blind inverse problems. ## Raster figures of our framework Below, we have attached a PDF of raster figures to compare the visual quality. PDF: /pdf/7ee36a1abefba9c8084185ec918f9067be1a77eb.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Covariate Shift Corrected Conditional Randomization Test
Accept (poster)
Summary: The paper introduces a novel approach to addressing covariate shift in conditional independence tests. By leveraging importance weights and the control variates method, the paper proposes the Covariate Shift Corrected Pearson Chi-squared Conditional Randomization (csPCR) test, which maintains asymptotic Type-I error control. In addition, a power enhancement is proposed, which reduces the variance of the test statistic and thus improves the power against alternatives. The methodology is validated both theoretically and empirically, demonstrating superior performance in simulation studies and a practical application to a COVID-19 treatment dataset, highlighting its efficacy in real-world scenarios. Strengths: Originality The paper presents original contributions by introducing a new approach to conditional independence testing under covariate shift. This method combines the ideas of the conditional randomization test, importance weighting, and the control variates technique. Quality The work is of high quality, demonstrated through rigorous statistical theory on validity and power, and comprehensive empirical validations. Clarity The paper is clearly written and well-organized. Each section has a clear message and readers can quickly grasp the main idea. Significance I consider this work a significant contribution to the field of conditional independence testing and covariate shift. The method is easy to implement, and is more powerful than existing methods as shown in numerical experiments. Weaknesses: The paper does not have serious technical weaknesses, but I am concerned about the limited applicability of this method in real-world problems. Based on my understanding, one would test for conditional independence directly on the target distribution if the outcomes in the target population were easy to collect. So csPCR is useful only when the outcomes are costly to collect. This calls into question several application examples mentioned in the paper.
First, in the college admission example (Lines 37-53), why don't the economists just collect the college admission results from the target population and then conduct the vanilla PCR? Typically, the target outcomes have to be collected for follow-up analysis after the conditional independence test. Second, Section 5 introduces a real-world application of this method to COVID-19 pandemic data, where the source and the target data sets are segmented based on time. Given such a sequential data-collection process, it is unclear whether the iid assumption still holds in the source and target distributions. See Questions for more details. Despite the concerns above, I do believe that this method is useful for certain genetic and biomedical problems, where the outcomes are difficult to collect, but other covariates (such as SNPs) are relatively easy. Therefore, I suggest the authors clarify the application examples a bit by discussing more reasonable applications with some references. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. Page 5, Algorithm 1: Besides the variance issues considered in Algorithm 2, it is clear that the choice of test statistic in Eq (4) also affects the power. Can you comment on how to choose the test statistic? In the numerical experiment, the test statistic is specified to be the product of $Y$ and $X$, which makes sense as it measures the "correlation" between Y and X. However, $Z$ is not included in the test statistic, which seems to decrease its power. Have you tried any other test statistics? Do different choices affect the power significantly? For instance, one alternative is to use $(Y - \beta_y Z)(X - \beta_x Z)$, where $\beta_y$ and $\beta_x$ are regression coefficients for $Y\sim Z$ and $X\sim Z$ on an independent data set from the source distribution. 2. Page 4, Lines 170-177: The paper argues that they used PCR as the baseline test because PCR is more powerful than the vanilla CRT.
But on the other hand, CRT is also better than PCR in the sense that it has finite-sample Type-I error control, rather than asymptotic Type-I error control. It's worth mentioning this point as well. While I notice that the simulations in the paper show satisfactory Type-I error control, the Type-I errors may get inflated if, for example, the data have heavy tails. 3. Page 9, Section 5: Relating to the weaknesses, have you run any tests to check the iid assumption on the COVID-19 data set? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The paper discusses the limitations of the method under model misspecification in Section 4, pointing out that the power enhancement can disappear when there is a fully nonlinear component in the model. The authors have addressed the limitations adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We extend our sincere thanks for your reviewing work and insightful comments. Please see our responses to your comments and questions below. $\textbf{Weaknesses 1}$: In the college admission example, why don't the economists just collect the college admission results from the target population? Response: In the college admission example, large-scale data on SAT scores is more easily accessible through high schools, exam preparation schools, or the College Board (the organization that administers the SAT exam). Conversely, college admission results are more difficult and costly to obtain, as they require individual-level surveys. Furthermore, students might be reluctant to disclose whether they were rejected in such surveys. Following your suggestion, we also plan to add another example about a clinical study. Suppose we are interested in testing the treatment effect of some drug $X$ on some long-term outcome $Y$ such as five-year survival. In this case, researchers collect $V$ as some early endpoint surrogate that can be measured within a short term, usually a few weeks or months post treatment, e.g., tumor response rate in cancer treatment; see [e.g., VanderWeele 2013]. VanderWeele, Tyler J. "Surrogate measures and consistent surrogates." Biometrics 69.3 (2013): 561-565. $\textbf{Weaknesses 2}$: Given such a sequential data-collection process for the COVID data, it is unclear whether the iid assumption still holds. & $\textbf{Question 3}$: Have you run any tests to check the iid assumption? Response: The period spanning January 2020 to November 30, 2021, in our source dataset captures the early waves of the COVID-19 pandemic, including the original strain and early variants such as Alpha and Beta. After November 30, 2021, our target dataset includes admissions during subsequent waves dominated by variants like Delta and Omicron.
The segmentation of the dataset at November 30, 2021, is intentional to account for significant shifts in virus characteristics, public health policies, and medical treatments, while within the source or target, the distribution of these covariates is assumed to be consistent, reflecting the stabilization of public health responses and medical treatments specific to the early/later variants and the increased coverage of vaccinations. In addition, we plan to run rigorous tests (e.g., the Kolmogorov-Smirnov test) to examine this i.i.d. assumption within the source and target upon the acceptance of the paper. $\textbf{Question 1}$: Besides the variance issues considered in Algorithm 2, it is clear that the choice of test statistic in Eq (4) also affects the power. Can you comment on how to choose the test statistic? Response: The main principle in choosing the test statistic is to characterize the conditional dependency between $X$ and $Y$ under the alternative hypothesis. We agree that $YX$ may not be the optimal choice for the test statistic and that using $(Y-\hat{E}[Y\mid Z])(X - E[X\mid Z])$ could remove the confounding effect of $Z$. Inspired by this, we used $Y(X - E[X\mid Z])$ as the test statistic to conduct additional simulations. The results are presented in supplementary Figure R3. We find that $Y(X - E[X\mid Z])$ and $YX$ produce nearly the same power for both csPCR and csPCR(pe). We did not use $(Y-\hat{E}[Y\mid Z])(X - E[X\mid Z])$ because it requires sample splitting, i.e., estimating $\hat{E}[Y\mid Z]$ on some hold-out sample (otherwise, the theoretical Type-I error control of the PCR test cannot be guaranteed). An alternative strategy is to get an estimate of $P(Y\mid X,Z)$ on some hold-out training data as mentioned above, then naturally use $\log{P(Y\mid X,Z)}-\log{P(Y\mid Z)}$ as the test statistic [Tansey et al., 2022].
This could increase the ability to capture nonlinear dependence, but the sample splitting will generally cause a loss of power. The surrogate or auxiliary $V$ in our case could potentially help this hold-out training procedure and alleviate the power-loss issue. We plan to add simulations on this upon the acceptance of our paper. Tansey, Wesley, et al. "The holdout randomization test for feature selection in black box models." Journal of Computational and Graphical Statistics 31.1 (2022): 151-162. $\textbf{Question 2}$: Comparison of PCR with vanilla CRT? Response: You are perfectly correct that PCR does not achieve exact Type-I error control. Meanwhile, we would like to point out that, unlike other asymptotic inference approaches, PCR and csPCR are less prone to issues with heavy-tailed data, as they rely on the binary indicator variable of each subject to construct the chi-squared statistics. Nevertheless, small sample sizes could still cause Type-I error inflation in csPCR. We note that the PCR paper [Javanmard and Mehrabi, 2021] introduced a finite-sample Type-I error control version of PCR, which is achieved using concentration inequalities. Their strategy could be naturally incorporated into the current csPCR to achieve better (exact) Type-I error control. We will add a discussion on this extension. Additionally, we agree that our statement regarding PCR being more powerful than CRT was overly broad. A more precise statement would be that PCR can handle some more challenging alternatives not addressed well by the vanilla CRT, as demonstrated in [Javanmard and Mehrabi, 2021]. In our numerical studies, we used the PCR construction on all benchmarks for a fairer comparison, and we found that our method outperforms the IS method with the vanilla CRT as well. $\textbf{References}$ Javanmard, Adel, and Mohammad Mehrabi. "Pearson chi-squared conditional randomization test." arXiv preprint arXiv:2111.00027 (2021).
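The test-statistic discussion in this thread can be illustrated with a minimal conditional randomization test sketch. This is our own illustration, not the paper's csPCR setup: the toy data-generating model, sample sizes, and both candidate statistics below are assumed for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_resample = 2000, 200

# Toy model (illustrative): X | Z ~ N(Z, 1) is known (model-X),
# and Y depends on X under the alternative hypothesis.
z = rng.normal(size=n)
x = z + rng.normal(size=n)
y = 0.8 * x + z + rng.normal(size=n)

def crt_pvalue(stat):
    """One-sided CRT p-value: resample X from the known law X | Z."""
    t_obs = stat(x)
    t_null = np.array([stat(z + rng.normal(size=n)) for _ in range(n_resample)])
    return (1 + np.sum(t_null >= t_obs)) / (1 + n_resample)

p_prod = crt_pvalue(lambda xs: np.sum(y * xs))         # statistic Y*X
p_resid = crt_pvalue(lambda xs: np.sum(y * (xs - z)))  # Y*(X - E[X|Z])
print(p_prod, p_resid)  # both small under this alternative
```

Both statistics reject here; as the rebuttal notes, centering $X$ by $E[X\mid Z]$ mainly changes the null distribution of the statistic rather than the qualitative conclusion in simple settings like this one.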
--- Rebuttal Comment 1.1: Comment: Thanks for the clarification. I don't have further concerns.
Summary: This paper proposes a new variation of the CRT to be applied in the presence of covariate shift. The paper presents a method and an extension of it with higher power. Then, the authors present the needed theoretical results and finish with experiments. Strengths: - The paper proposes a theoretically correct approach for testing CI under covariate shift; - The paper presents its algorithms and theoretical results; - The paper has convincing experiments; Weaknesses: - It would be interesting if the authors could estimate the full density ratio in their simulations (possibly in a high-dimensional scenario) and then compare their results with the IS approach; - The paper does not conduct any experiment where Type-I error control is shown on a real dataset, as for example in [1,2]. References [1] Pogodin, R., Schrab, A., Li, Y., Sutherland, D. J., & Gretton, A. (2024). Practical Kernel Tests of Conditional Independence. arXiv preprint arXiv:2402.13196. [2] Maia Polo, Felipe, Yuekai Sun, and Moulinath Banerjee. "Conditional independence testing under misspecified inductive biases." Advances in Neural Information Processing Systems 36 (2023): 58577-58612. Technical Quality: 3 Clarity: 3 Questions for Authors: - How can we understand the role of the effective sample size (weight imbalance) in the effectiveness of your method? - Could you elaborate more on the surrogate variables? I am familiar with the literature on CI testing, but I haven't seen these variables in other works. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors comment on limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We extend our sincere thanks for your reviewing work and insightful comments. Please see our responses to your comments and questions below. $\textbf{Weakness 1}$: It would be interesting if the authors could estimate the full density ratio in their simulations (possibly in a high-dimensional scenario) and then compare their results with the IS approach. Response: We have added additional experiments with the full density ratio estimated, as shown in Figure R2 of the supplementary PDF file (attached to the global rebuttal response). We chose a high-dimensional setting where the dimension of $Z$ is 50. We fit high-dimensional regressions to estimate the joint density ratio $e(X, Z, V)$ and the conditional model $X \sim Z$ with the unlabeled data. The results show that when the sample size is larger than 800, all three methods have good Type-I error control. With a relatively small sample size for estimation, the IS method and the power-enhancement version of our method retain good Type-I error rate control, but the vanilla csPCR method may have an inflated Type-I error rate (e.g., when $n_e = 400$, the Type-I error rate of csPCR is 0.108, while the IS method and csPCR(pe) remain at 0.052). The statistical power follows a similar pattern as presented in the paper (csPCR(pe) > csPCR > IS) but is uniformly lower for all three methods because of the lower estimation accuracy (e.g., when $\beta = 2$, the power of csPCR(pe) is 0.833, while in our original experiments it is 0.867). $\textbf{Weakness 2}$: The paper does not conduct any experiment where Type-I error control is shown on a real dataset, as for example in [1,2]. Response: We propose the following steps to validate Type-I error control in the real dataset. (1) We will adjust the cutoff date for segmentation to an earlier point, expanding the target dataset.
For example, moving the cutoff date from November 30, 2021, to a mid-2021 date increases the sample size in the target dataset while still capturing significant changes in the pandemic landscape. With the expanded dataset, we will rerun our csPCR test and calculate the empirical Type-I error rate. This involves comparing the test results against the true outcomes in the expanded target dataset. (2) Additionally, we will generate multiple permuted datasets by randomly shuffling Y while keeping X and Z unchanged. For each permuted dataset, we will apply the csPCR test and record the results to calculate the empirical Type-I error rate. $\textbf{Question 1}$: How can we understand the role of the effective sample sizes (weight imbalance) in the effectiveness of your method? Response: We notice a line of work on measuring the effective sample size (ESS) of importance weighting or sampling in the statistical computation literature, e.g., [Martino et al., 2017] and others. Among them, one of the most common approaches is to use the ratio $n_{\mathrm{eff}}=\left(\sum_{i=1}^{n}w_i\right)^2 / \sum_{i=1}^{n}w_i^2$ to approximate the ESS. When the covariate shift between the source and target becomes stronger, the variance of the importance weights $w_i$ tends to be large and $n_{\mathrm{eff}}$ becomes smaller, which can result in lower power. Our power-enhancement method based on control variates could potentially alleviate this issue with properly specified control functions. We plan to carry out additional simulation studies on the relationship between the power of csPCR and the effective sample size (affected by the degree of covariate shift) in the camera-ready version of the paper upon acceptance. $\textbf{Question 2}$: Could you elaborate more on the surrogate variables? I am familiar with the literature on CI testing, but I haven't seen the presence of these variables in other works.
Response: A surrogate or silver standard label is a variable that is more feasible and accessible than $Y$ in data collection and can be viewed as a noisy measure of $Y$. In clinical trials, $Y$ is often a longer-term outcome; in such settings, surrogates are measures that can predict the effect of a treatment on the longer-term outcome $Y$. These surrogates can be biomarkers or clinical parameters measured relatively quickly, usually within a few weeks or months of starting treatment. For example, in clinical trials, tumor response rate is often used as a surrogate for overall survival, and blood pressure is commonly used as a surrogate for cardiovascular events such as heart attacks. Surrogate variables are also commonly used in environmental studies and economics. Regarding conditional independence testing, Li and Liu (2023) study how surrogate variables can improve the robustness of the Conditional Randomization Test (CRT). They propose a method called Maxway CRT, which leverages knowledge of the distribution of $Y|Z$ to enhance the robustness of the CRT. Surrogate variables are extremely helpful in learning the distribution of $Y|Z$ because there is usually much more data on surrogates than on the outcome variable. Finally, we want to emphasize that without the surrogate variable, even if there is a shift in the distribution of $X$, the original CRT on the source data remains valid for the target population, provided that the distribution of $Y | X, Z$ is assumed to be the same in both the source and target populations. $\textbf{References}$ Martino, Luca, Víctor Elvira, and Francisco Louzada. "Effective sample size for importance sampling based on discrepancy measures." Signal Processing 131 (2017): 386-401. Li, Shuangning, and Molei Liu. "Maxway CRT: improving the robustness of the model-X inference." Journal of the Royal Statistical Society Series B: Statistical Methodology 85.5 (2023): 1441-1470. --- Rebuttal 2: Comment: Thank you for your reply! 
- W1: Thank you for your new experiment! (this suggestion is related to the ESS question because in high dimensions we expect the ESS to be lower) - W2: I think the idea is good. When shuffling Y, you will need to do that within each value of Z, correct? If your Z is continuous, you would probably need to use some binning. - Q1: My question was related to the paper you mentioned here by Martino et al. In that paper, they argue that the ESS can be defined as a variance ratio, and the implications of that are clear in the IS literature. I was wondering if you could extract some more meaningful relationships here as well. - Q2: Thank you for the detailed explanation! I would probably include a brief introduction of the surrogate variable in the abstract, since you mention it but it might not be clear that you are going to use surrogates throughout the paper (you are working in this specific setup, which is not the same as the original CRT setup). I have increased my score. --- Rebuttal Comment 2.1: Comment: We express sincere thanks for your insightful comments and positive recognition of our work. $\textbf{W1}$: Thanks for pointing out that an increasing dimension tends to cause more variable importance weights and a lower ESS. We will follow this to design our additional simulation on power vs. ESS (changing with the variance of the importance weights). $\textbf{W2}$: Thanks! When shuffling $Y$, we think both (i) marginal permutation and (ii) permutation conditioning on (the full set or a subset of) Z will cause Y and X to be independent conditional on Z. Thus, we will try both setups, and you are correct that we can use binning if Z is continuous and relatively high-dimensional. $\textbf{Q1}$: Thanks for the question!
We think the effective sample size of our csPCR estimator (without the control variate) can be defined in a similar way as in their paper, i.e., the original source sample size times the ratio between the variances of (i) and (ii), where (i) stands for the unweighted indicator on the target (which one would obtain if one could observe and use the same amount of labeled sample on the target for PCR), and (ii) stands for the weighted indicator variable on the source. This ratio can be shown to be smaller than or equal to 1, with equality holding only when there is no covariate shift between the source and target. Thus, covariate shift causes a loss of effective sample size for our method, and a larger degree of covariate shift tends to induce a larger variance of the weighted estimator on the source and a larger loss of ESS. This interpretation is quite analogous to that in the IS literature. Further, we can write down the ESS for our power-enhanced (PE) estimator and apply our Theorem 2 to show that it is larger than the ESS of csPCR without PE. $\textbf{Q2}$: Thanks for the suggestion, and we will highlight the specific setup with surrogates more in the abstract and introduction sections!
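The ESS approximation discussed in this thread can be computed directly. The following is our own minimal sketch (not from the rebuttal's experiments), using a Gaussian mean shift of size `delta` as a stand-in for the degree of covariate shift:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

def ess(w):
    # Kong's approximation: ESS = (sum w)^2 / sum w^2
    return w.sum() ** 2 / (w @ w)

# Source covariates Z ~ N(0, 1); hypothetical target Z ~ N(delta, 1),
# so the importance weight is the Gaussian density ratio below.
z = rng.normal(size=n)
for delta in (0.0, 0.5, 1.0, 2.0):
    w = np.exp(delta * z - delta ** 2 / 2)
    print(f"shift {delta}: ESS ~ {ess(w):.0f}")  # shrinks as the shift grows
```

With no shift, all weights equal 1 and the ESS is exactly $n$; as the shift grows, the weights become more variable and the ESS collapses, matching the power-loss intuition in the response above.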
Summary: This paper introduces the Covariate Shift Corrected Pearson Chi-squared Conditional Randomization (csPCR) test, designed to address covariate shift in conditional independence testing. The csPCR method incorporates importance weights and employs the control variates method to enhance test power and reduce variance in the statistical analysis. Theoretical contributions demonstrate that csPCR controls the Type-I error asymptotically. Empirical validations through simulation studies and a real-world application assessing COVID-19 treatment effectiveness showcase the method's effectiveness. Strengths: 1. **Theoretical contribution:** The paper makes a theoretical contribution by addressing covariate shift in conditional independence testing. 2. **Methodological Innovation:** Introduction of the Covariate Shift Corrected Pearson Chi-squared Conditional Randomization (csPCR) test, which incorporates importance weights and the control variates method to manage covariate shift effectively. 3. **Empirical Validation:** Extensive simulation studies and a real-world application (assessment of COVID-19 treatment on 90-day mortality) demonstrate the practical efficacy and superior power of the proposed csPCR test over traditional methods. Weaknesses: 1. **Dependence on Accurate Estimation of the Density Ratio:** The csPCR test's performance depends critically on accurate estimation of density ratios. This dependence could pose significant challenges in practical scenarios characterized by limited, noisy, or high-dimensional data, where reliable density ratio estimation becomes inherently difficult. 2. **Clarity on the Advantage Over Resampling Methods:** The paper does not explicitly clarify the advantages of the csPCR test over simpler resampling-based methods that also utilize estimated density ratios.
While csPCR integrates density ratios directly into the test statistic calculation and employs variance reduction techniques, it remains crucial for the authors to demonstrate why these features offer substantial improvements over traditional methods that adjust the sample weights during resampling. The discussion should address whether the complexity of csPCR provides tangible benefits, such as improved error rates or robustness in more varied practical applications, compared to potentially simpler resampling approaches. Technical Quality: 3 Clarity: 3 Questions for Authors: - Could the authors provide a detailed comparison between the csPCR test and traditional resampling methods that also use estimated density ratios? This would further strengthen the paper. - The paper mentions the use of control variates to reduce the variance introduced by importance weights. Could the authors discuss how this approach compares to variance reduction techniques used in resampling methods? - Could the authors elaborate on any theoretical limitations or potential failure modes of the csPCR method? - Are there additional empirical validations planned or underway to test the csPCR method across more varied datasets? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors are encouraged to add a separate limitations section to either the main paper or the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{Weakness 1}$: Dependence on Accurate Estimation of the Density Ratio \& $\textbf{Question 3}$: Theoretical limitations? Response: We believe our method's limitation lies mainly in settings where the model-X assumption fails, i.e., when the distribution of the covariates cannot be accurately learned. In such settings, Type-I error inflation may occur. The robustness of the CRT and PCR tests under this condition has been studied extensively [Berrett et al., 2020; Javanmard and Mehrabi, 2021; Li and Liu, 2023]. Theoretical upper bounds for the Type-I error inflation have been given; moreover, simulation studies show that the CRT and PCR tests are usually more robust than these bounds suggest. For our csPCR test, we believe we can theoretically show results similar to those in Section 6 of [Javanmard and Mehrabi, 2021]. Empirically, we conducted simulation studies (some included in our submission, plus new ones) where the covariate distribution is estimated from data. Our studies include two scenarios: (i) one has knowledge of $P(X\mid Z)$ and needs to estimate $P(V \mid X, Z)$; (ii) one has to estimate the full $P(X, V, Z)$ without any knowledge. For (i), please refer to Figure 3 for our numerical results showing that even with moderate sample sizes used for learning $P(V \mid X, Z)$ (e.g., $n_e=500$, equal to the labeled sample size used for testing), the csPCR test still maintains good Type-I error control (nearly no inflation above 0.05). For the more challenging scenario (ii), we have added additional experiments with the full distribution density (or density ratio) estimated using the unlabeled samples. The results can be found in Figure R2 of the supplementary PDF file (attached to the global rebuttal response). In this case, our csPCR(pe) approach can still achieve Type-I error control at $n_e=400$ as well as good power, and the vanilla csPCR shows proper Type-I error control at $n_e=700$, which is still not large compared to the testing sample size of 500.
$\textbf{Weakness 2}$: Advantage Over Resampling Methods & $\textbf{Question 1}$: Comparison with traditional resampling methods? Response: First, we would like to clarify that we are benchmarking against the specific DRPL resampling method (referred to as IS in our paper) proposed in [Thams et al., 2023]. This approach was proposed for the general purpose of testing under distributional shifts and is, to our best knowledge, the only existing strategy for conditional independence testing under covariate shift. It is essentially different from traditional bootstrap resampling or importance sampling procedures. Specifically, IS performs resampling without replacement and typically has to sample a much smaller subset (theoretically, of order $o(\sqrt{n})$) of the source data to approximate the target. Consequently, the power of IS is substantially lower than that of our approach. If the resample size of IS is overly increased, it may fail to control the Type-I error due to excessive similarity between the resampled data and the original source data. To further illustrate, we conducted additional experiments with varied resample sizes in IS to assess the effect on Type-I error control and power; see Figure R1 in the supplementary PDF file (attached to the global rebuttal response). One can observe that IS starts to show high Type-I error inflation when its resample size increases to 400, but still shows much lower power (by around 0.4) than our method with this resample size (or even larger ones). This indicates that our method achieves better statistical efficiency than IS (DRPL). We also find that similar results hold regardless of whether the density ratio and the model of $X$ are known or estimated. $\textbf{Question 2}$: Variance reduction techniques used in resampling methods? Response: As highlighted in our response to your Question 1, the IS method is essentially different from traditional resampling methods.
The authors in [Thams et al., 2023] have not proposed any variance reduction methods such as control variates, and we do not see a natural way to accommodate control variates in their framework. Therefore, we do not see a feasible way to make a direct comparison. $\textbf{Question 4}$: Additional empirical validations? Response: We plan to add the following new experiments with the real-world data: 1. Expansion of the target dataset. We will adjust the cutoff date for segmentation to an earlier point, expanding the target dataset. For example, moving the cutoff date from November 30, 2021, to a mid-2021 date increases the sample size in the target dataset while still capturing significant changes in the pandemic landscape. 2. With the COVID data, we will generate multiple permuted datasets by randomly shuffling the outcome variable Y while keeping the treatment variable X and covariates Z unchanged. This process breaks the relationship between Y and X given Z, simulating the null hypothesis, which allows us to test the robustness of our method in Type-I error control. 3. Different outcomes. In addition to readmission, we will evaluate different outcomes such as mortality. Specifically, we will analyze mortality within 30 and 90 days of hospital admission due to COVID-19. $\textbf{References}$ Berrett, Thomas B., et al. "The conditional permutation test for independence while controlling for confounders." Journal of the Royal Statistical Society Series B: Statistical Methodology 82.1 (2020): 175-197. Javanmard, Adel, and Mohammad Mehrabi. "Pearson chi-squared conditional randomization test." arXiv preprint arXiv:2111.00027 (2021). Li, Shuangning, and Molei Liu. "Maxway CRT: improving the robustness of the model-X inference." Journal of the Royal Statistical Society Series B: Statistical Methodology 85.5 (2023): 1441-1470. Thams, Nikolaj, et al. "Statistical testing under distributional shifts."
Journal of the Royal Statistical Society Series B: Statistical Methodology 85.3 (2023): 597-663. --- Rebuttal 2: Comment: We extend our sincere thanks for your reviewing work and insightful comments. Please see our responses to your comments and questions as below in the rebuttal. --- Rebuttal Comment 2.1: Comment: Thanks for your detailed response. Most major concerns have been solved. I would like to keep my original rating. --- Rebuttal 3: Title: Reminder to engage in the discussion Comment: Hello reviewer MwvS. The reviewer-author discussion period is between Aug 7-13 (AoE). Please read the authors’ rebuttal, see if it addresses your questions, and engage in a discussion with the authors. You are strongly encouraged to read the official reviews posted by other reviewers. Some of your questions may have already been answered. **You must acknowledge the authors’ rebuttal and give a reason for why it did or did not address your concerns. If the rebuttal does not change your rating, please also state the reason.** Thank you for your services.
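As general background on the control-variates idea referenced in this thread, here is a minimal sketch of our own (not the paper's csPCR construction): a control variate with known mean under the source, such as the centered weight $w - 1$, can reduce the variance of an importance-weighted estimate. All distributions and the target functional below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000

# Importance-weighted estimate of E_target[f(Z)] using source samples.
z = rng.normal(size=n)              # source Z ~ N(0, 1)
w = np.exp(0.8 * z - 0.32)          # density ratio for target N(0.8, 1)
f = (z > 0).astype(float)           # quantity of interest: P(Z > 0) on target

est_plain = np.mean(w * f)

# Control variate: g = w - 1 has known mean 0 under the source,
# since E_source[w] = 1 for a valid density ratio.
g = w - 1
c = np.cov(w * f, g)[0, 1] / np.var(g)  # estimated optimal coefficient
est_cv = np.mean(w * f - c * g)

print(est_plain, est_cv)  # both near Phi(0.8) ~ 0.788
```

Both estimators target the same quantity; the control-variate version typically has a smaller variance because $w f$ and $w - 1$ are positively correlated, mirroring the variance-reduction role the control variates play in csPCR(pe).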
Summary: This paper addresses the issue of conditional independence under covariate shift. The authors' goal is to test the conditional independence for causal inference in the target data. The authors can use the source data whose distribution is potentially different from that of the target data. For this problem, the authors propose using the covariate-shift adaptation method and develop a method for testing the null hypothesis. Strengths: This paper addresses the issue of conditional independence under covariate shift, which I find to be an intriguing problem. However, I was unable to comprehend the underlying assumptions of this problem, making it difficult to proceed with the reading. I understand the objective of this manuscript to be as follows: 1. The authors aim to test the null hypothesis $H_0: X \perp Y \mid Z$ for the target data (Here, I denote the independence by $\perp$). 2. There is a possibility that $X \perp Y \mid Z$ does not hold simultaneously for both the source and target data. 3. To test the null hypothesis for the target data, the authors employ an algorithm adapted to covariate shift. My primary concern is whether the assumption "there is a possibility that $X \perp Y \mid Z$ does not hold simultaneously for both the source and target data" is justified in the first place. The authors assume that, in order to use the covariate shift adaptation algorithm, - $Y \mid X, Z, V$ is the same in both the source and target data. Under this assumption, is it not the case that the situation where $X \perp Y \mid Z$ holds in one dataset and does not hold in the other cannot exist? $X \perp Y \mid Z$ implies that $p(y, x \mid z) = p(y \mid x, z) p(x \mid z) = p(y \mid z) p(x \mid z)$, meaning that $p(y \mid x, z) = p(y \mid z)$. Given that we assume $Y \mid X, Z, V$ to be the same in both the source and target data, does this not imply that whether $X \perp Y \mid Z$ holds cannot differ between the source and target data? 
Due to this ambiguity, I was unable to further evaluate the paper. I would like the authors to clarify this point so that I can proceed with the evaluation. Weaknesses: See above. Technical Quality: 2 Clarity: 2 Questions for Authors: See above. Confidence: 1 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We extend our sincere thanks for your reviewing work and important comments. We believe your main confusion lies in the existence of V and the fact that $P(V \mid X, Z)$ can be different between the source and target. This can lead to a situation where $H_0: X \perp Y \mid Z$ does not hold simultaneously on the source and target (note that V is an auxiliary feature not included in the hypothesis of our primary interest). For illustration, consider the following simplified example: Let $X \sim N(0, 1)$ and $Z = X + \epsilon_z$ on both source and target populations, where $\epsilon_z$ is an independent noise term. For the source, let $V_{\mathcal{S}} = -X_{\mathcal{S}} + \epsilon_v$, while on the target, let $V_{\mathcal{T}} = \epsilon_v$, where $\epsilon_v$ is a noise term independent of X and Z. On both source and target, let $Y = X + Z + V + \epsilon$, corresponding to our assumption that $P(Y \mid X, Z, V)$ holds the same between the source and target. In this case, one can derive that on the source, $Y_{\mathcal{S}} = Z + \epsilon_v + \epsilon$, while on the target, $Y_{\mathcal{T}} = X + Z + \epsilon_v + \epsilon$. Thus, $X \perp Y \mid Z$ holds on the source but not on the target, which underscores the importance of our setup and method. A more general data generation setup can be found in Figure 2 of our paper. In this diagram, $V$ could be interpreted as a mediator or early endpoint surrogate seen in real-world studies. Taking clinical trials as an example, suppose we are interested in testing the treatment effect of some drug $X$ on some long-term outcome $Y$ such as five-year survival, adjusting for some baseline confounder $Z$. In this case, researchers collect $V$ as some early endpoint surrogate that can be measured within a short term, usually a few weeks or months post treatment, e.g., tumor response rate in cancer treatment. Thus, $V$ is easier to collect and also informative to $Y$. 
Then it is reasonable to assume that $P(Y\mid X,Z,V)$ is shared by the source and target populations while $P(V\mid X,Z)$ has a distributional shift between the two populations [Kallus and Mao, 2020]. Please find other real-world examples in our response to Weakness 1 from Reviewer REEU. We hope this clarification addresses your concerns and enables you to proceed with the evaluation. Thank you! $\textbf{References}$ Kallus, Nathan, and Xiaojie Mao. "On the role of surrogates in the efficient estimation of treatment effects with limited outcome data." arXiv preprint arXiv:2003.12408 (2020). --- Rebuttal 2: Title: Re: Rebuttal by Authors Comment: I appreciate the authors' reply, which has deepened my understanding of your contributions. It seems I might have caused some misunderstanding with my previous question. Here is what I intended to ask: Objective of this study: To investigate whether conditional independence holds in the target data. Assumption in this study: Covariate shift = meaning that P(Y | X, Z, V) remains unchanged between the source data and the target data. Under this situation, wouldn't testing only the source data suffice to achieve the objective, given the assumption? I wanted to confirm whether I misunderstood the problem setting. --- Rebuttal Comment 2.1: Comment: Thank you for your reply and clarification! You are correct that our goal is to test $X \perp Y \mid Z$ on the target data, with the assumption that $P(Y \mid X, Z, V)$ remains consistent between the source and target domains. However, even if $P(Y \mid X, Z, V)$ is the same for both the source and target, $P(V \mid X, Z)$ can still differ between them. As we demonstrated in our example in the rebuttal, this difference in $P(V \mid X, Z)$ led to $X \perp Y \mid Z$ holding true on the source data but not on the target. If we only test on the source data, this discrepancy can significantly inflate the Type-I error rate, as illustrated in Figure 1 of our paper. 
Please let us know if you have any further concerns. --- Rebuttal 3: Title: Re: Official Comment by Authors Comment: Thank you for your response. I have gained a deeper understanding of the authors' contributions. However, to be honest, I am not entirely clear about the motivation behind this study. If the authors assume that the conditional distributions are equal between the source and target data, wouldn't it suffice to test only on the source data? I am not convinced of the necessity of worrying about the Type 1 error in the test for the target data. In other words, it is unclear to me why we conduct the test on the target data. I also have doubts about the claim that the Type 1 error changes. I think this is due to covariate shift, but if we consider covariates as non-stochastic, wouldn't the Type 1 error remain unchanged? My background is actually in economics. While the authors provide examples from economics, in social science data analysis, it is common to estimate models using only the source data and then conduct counterfactual simulations on the target data under the assumption that the conditional distributions are equal. That is, it is enough to care only about the model on the source data, and we can transfer the results under the assumption that the conditional distribution is invariant. Given these typical analysis procedures, I find the problem setting somewhat difficult to understand. I do not oppose acceptance, but I have some reservations about the problem setting, so I will keep my current score. In the future, since this problem setting may not be intuitively accepted, it might be worthwhile to focus more on justifying the problem setting itself rather than on the technical issue of controlling the Type 1 error.
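The linear-Gaussian example in the authors' rebuttal above can be checked numerically. A minimal sketch (assuming standard-normal noise terms; partial correlation is used as the conditional-independence measure, which is exact in this linear-Gaussian setting):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def simulate(target):
    # X ~ N(0,1), Z = X + eps_z on both populations
    x = rng.normal(size=n)
    z = x + rng.normal(size=n)
    eps_v = rng.normal(size=n)
    # Source: V = -X + eps_v; target: V = eps_v  (P(V|X,Z) shifts)
    v = eps_v if target else -x + eps_v
    # Shared outcome model P(Y|X,Z,V): Y = X + Z + V + eps
    y = x + z + v + rng.normal(size=n)
    return x, y, z

def partial_corr(x, y, z):
    # correlation of X and Y after linearly adjusting both for Z
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

src = partial_corr(*simulate(target=False))
tgt = partial_corr(*simulate(target=True))
print(f"source partial corr: {src:.3f}")  # ~0: X indep. of Y given Z
print(f"target partial corr: {tgt:.3f}")  # clearly nonzero
```

This reproduces the rebuttal's claim: the partial correlation vanishes on the source but not on the target, even though $P(Y \mid X, Z, V)$ is identical in both populations.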
Rebuttal 1: Rebuttal: We sincerely thank all reviewers and chairs for their feedback. We have addressed each of the reviewers' points individually and have included a PDF with additional experiments. The first experiment demonstrates that our proposed csPCR method has more stable Type I error rate control and higher power, without the need for tuning the resample size, compared to the IS method. The second experiment shows that when estimating the full density ratio, the vanilla csPCR may suffer from slight Type I error inflation, while other results show similar patterns to our original experiments. The third experiment shows that with a different choice of test statistic, $Y(X - E[X|Z])$, both the Type I error rate and power do not change significantly. Pdf: /pdf/56ef9cb53bcd95f62d7d4bb84f11f7232ae6640e.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Differential Privacy in Scalable General Kernel Learning via $K$-means Nyström Random Features
Accept (poster)
Summary: The paper proposes differentially private scalable kernel ERM and KME algorithms that are more general than the prior works. The authors provide privacy and convergence proofs for their proposed approaches, and experimentally compare their approach with prior works. Strengths: - Thorough discussion on background and comparison with prior works - Detailed theoretical proofs for the proposed approach - Experimental evaluation of the proposed approach Weaknesses: None that I could find, although I'm not very familiar with the topic. Technical Quality: 3 Clarity: 2 Questions for Authors: None Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: I do not feel there are any negative societal impacts of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer's positive feedback.
Summary: The paper studies scalable kernel learning algorithms under differential privacy. First, the authors propose an algorithm for DP K-means Nyström approximation to obtain an orthonormal basis and their corresponding random feature map. Then, the authors use the basis and feature map for kernel ERM algorithm and kernel mean embedding algorithm. The paper presents both theoretical and empirical evaluations of the proposed methods, demonstrating their performance and superiority over existing approaches. Strengths: The proposed method is novel and practical. Both theoretical results and experiments are provided. Weaknesses: 1. The quality of the Nyström approximation depends on the DP-K-means method. However, the proposed method is based on an existing library. 2. This paper claims to provide a scalable solution. However, the experiments are somewhat limited in scope. (adult dataset) Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does the allocation of the privacy budget between different components of the algorithm (e.g., K-means, DP ERM) affect the overall performance? Could we improve the performance by choosing privacy budget more carefully? 2. The proposed methods claim to offer improved scalability compared to existing DP kernel learning algorithms. Can you provide a detailed comparison of the complexities of your algorithms versus existing methods? Specifically, how does the time/sample complexity scale with the size of the dataset and the dimensionality of the data? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1** Contributions While it is true that our algorithms utilize existing private learning schemes, we emphasize two novel contributions and one practically critical point: First, while the Nyström-based method is a known approach, applying it under privacy constraints is a novel and significant contribution. Existing scalable private kernel learning algorithms typically combine random Fourier features with private linear ERM algorithms, which was inadequate for private general kernel learning. We found that integrating the Nyström-based scheme with private linear ERM is effective, marking the first application of the Nyström-based scheme in the context of differential privacy. Second, implementing an effective Nyström-based scheme under privacy constraints is another key contribution. Various Nyström-based schemes have differing quality under privacy constraints. As shown in Figure 1(b) in the paper, the solid lines show the accuracy of private KME estimation using Nyström-based schemes implemented by various landmark selection methods. The standard subsampling-based Nyström method (gray line) performs poorly under privacy constraints. Our discovery that the DP $K$-means landmark selection is effective underlines the importance of investigating clustering-based Nyström methods under privacy constraints. Finally, our composition framework is a practical strength, enabling adaptive design of private learning algorithms in various settings. Existing private linear ERM algorithms are tailored to specific scenarios, such as convex or non-convex loss, smooth or non-smooth loss, differentiable or non-differentiable regularizer, etc. Since the composition framework only requires the subroutines to be differentially private, it allows one to privately learn various models by substituting private linear ERM algorithms for the corresponding setting. 
**W2.** Scalability While the scalability of our algorithm is discussed in terms of computational complexity in line 240, we will provide additional experimental results to address the reviewer's concern in the revised manuscript, if the paper is accepted. For instance, Figure 2 in the attached PDF demonstrates the utility of our algorithm for KME estimation across multiple datasets, with the Gaussian and polynomial kernels. We have conducted similar experiments on additional datasets and obtained consistent results. This additional experiment confirms that the superiority of the proposed algorithm is not confined to the adult dataset alone. **Q1.** Privacy budget allocation We agree that optimizing privacy budget allocations can enhance the effectiveness of private learning. Figure 1 in the attached PDF illustrates that there is a budget combination that outperforms others. One possible explanation is that the optimal allocation may depend on the strength of the clustered structures in the data. For instance, when the number of landmarks $m$ is small relative to the sample size $n$, it is advisable to allocate a larger portion of the privacy budget to the linear ERM rather than the DP $K$-means. This is because a smaller $m$ typically results in larger cluster sizes. Since DP $K$-means algorithms acquire private centroids by averaging the members of each cluster privately, larger clusters tend to lead to more accurate centroids given fixed privacy budgets. Therefore, we can afford to allocate more privacy resources to linear ERM. Further exploration to connect the cluster structure to the quality of the private landmarks is suggested as future work. **Q2.** Comparison of complexities We can provide a comparison of the time complexity of our algorithms versus existing methods for kernel ridge regression. Note that the time complexity of kernel ridge regression in the non-private setting is $O(n^3)$, arising from the inversion of the $n\times n$ kernel matrix. 
The time complexity of our DP kernel ERM algorithm (Algorithm 1) for kernel ridge regression is $O(nm^2+nmd)$ as given in line 240. The $nmd$ term is the time complexity of the DP $K$-means algorithm, and the $nm^2$ term is the time complexity of the linear regression for $n$ observations of $m$ dimensions with $m<n$. Other candidates, such as the functional noise adding algorithm in Hall et al. (2013) and Algorithm 3 in Jain et al. (2013), have different complexities. The former has a time complexity of $O(n^3)$ since it adds noise for the privacy guarantee after evaluating the non-private model, equating it to the time complexity of non-private kernel ridge regression. The latter has a time complexity of $O(n^d)$. Although we have only compared the time complexity of DP kernel ERM algorithms for kernel ridge regression, we note that the superiority of our algorithm extends to other kernel ERMs as well. Our algorithm reduces the time complexity by transforming an optimization problem of $n$ observations of $n$ dimensions to $n$ data of $m$ dimensions. However, the algorithm in Hall et al. (2013) requires solving the original optimization problem. Thus our algorithm will be more scalable than Hall et al. (2013) for general ERM. Also, the algorithm in Jain and Thakurta (2013) has a time complexity of $O(n^d)$ for general ERM. Finally, for the KME estimation (Algorithm 3), the time complexity of our algorithm is $O(nm^2+nmd)$, whereas the existing DP KME estimation algorithm suggested in Balog et al. (2018) has a time complexity of $O(nm^2)$. In this case, our algorithm can be slower than Algorithm 1 in Balog et al. (2018). However, our algorithm demonstrates superior performance compared to Algorithm 1 in Balog et al. (2018), as shown in Figure 1(b) and discussed in lines 318-322. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. My concerns regarding W1, Q1, Q2 are adequately addressed. 
However, I still have some reservations about the scalability results. In the provided PDF, the performance of the proposed method does not seem as good as on the ADULT dataset, especially for the 4th column, MNIST. Since MNIST is relatively small compared to many modern large-scale datasets used in practical scenarios, these results could raise concerns about the claimed scalability. Thus, my score remains unchanged.
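The complexity argument from the rebuttal above (replace the $O(n^3)$ kernel-matrix solve with an $m$-dimensional linear ridge problem over Nyström features) can be sketched non-privately. This is a toy illustration, not the paper's algorithm: the landmarks here are a plain random subsample rather than DP $K$-means centroids, and the data/kernel choices are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, lam = 500, 30, 1e-2
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

def rbf(A, B, gamma=0.5):
    # Gaussian kernel matrix k(a,b) = exp(-gamma * ||a-b||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Exact kernel ridge regression: O(n^3) solve of (K + n*lam*I) alpha = y
K = rbf(X, X)
alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
pred_exact = K @ alpha

# Nystrom route: m-dimensional feature map Phi with Phi Phi^T ~ K,
# then an O(n m^2) linear ridge problem
landmarks = X[rng.choice(n, m, replace=False)]
W = rbf(landmarks, landmarks)
evals, evecs = np.linalg.eigh(W + 1e-9 * np.eye(m))
Phi = rbf(X, landmarks) @ evecs @ np.diag(evals ** -0.5) @ evecs.T
w = np.linalg.solve(Phi.T @ Phi + n * lam * np.eye(m), Phi.T @ y)
pred_nystrom = Phi @ w

gap = np.abs(pred_exact - pred_nystrom).max()
print(f"max prediction gap between exact and Nystrom KRR: {gap:.4f}")
```

When the rank-$m$ approximation is accurate, the two prediction vectors nearly coincide while the linear-algebra cost drops from cubic in $n$ to linear in $n$.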
Summary: This paper considers the problem of differentially private kernel learning. The main idea of the proposed algorithm is to approximate the kernel matrix using Nystrom kernel embeddings. The landmark points on which the approximation is built are chosen as the centroids given by the k-means algorithm on the data, based on the relationship between the kernel approximation error and the k-means problem. Given the Nystrom kernel embeddings, the authors propose algorithms for three tasks: DP-ERM, kernel mean embeddings, and data release. Strengths: - The authors make an important observation about the relationship between the kernel matrix approximation error and the k-means problem. Based on this observation, they design an algorithm that releases an approximate feature map under differential privacy. - The released DP Nystrom approximation is useful as it can be used for other tasks. The authors demonstrate how the released feature map can be used for other tasks including ERM, hypothesis testing, and data release. - The paper addresses practical issues in applying DP kernel learning methods: scalability, the ability to use general kernels and objective functions, and a test data-free approach. Weaknesses: - While the proposed Nyström kernel embedding approach is versatile, its utility for a single task might be lower than that of algorithms entirely dedicated to the task. Unfortunately, the empirical evaluations provided in Section 4 do not provide a systematic comparison of utility with existing approaches for each task. - The equal privacy budget split between the Nystrom approximation and the linear ERM task seems arbitrary. Since Theorems 1 and 4 allow expressing the excess risk of the kernel ERM algorithm (Algorithm 2) as the sum of two errors, each of which can be expressed as a function of ϵ, it might be possible to find a more advanced approach for the privacy budget allocation. 
- When it comes to the privatization method, the proposed algorithm seems incremental as its privacy relies on the privacy of its subroutines. Technical Quality: 4 Clarity: 3 Questions for Authors: - While Eq. (2) shows that the approximation error of the kernel matrix is similar to that of k-means, it seems that the set Z here can be viewed as an optimization variable. Is it possible to formulate the problem as a bi-level optimization? - Algorithm 1 embeds the landmark points into functions in a reproducing kernel Hilbert space (RKHS) and releases the random feature map along with the bases. Line 5 in Algorithm 1 computes the scale factor R. Is R computed independently of the data? - In line 3 of Algorithm 1, the algorithm samples the landmark points from distribution Q when $K < m$. Although the authors mention that this distribution Q can be arbitrarily chosen, this is not intuitive, as the landmark points should be representative of the underlying dataset. How does the choice of distribution Q affect the results? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Due to the use of $K$-means as a sub-routine, the proposed approach inherits the limitations of $K$-means, for example, the difficulty of handling datasets with mixed data types and the curse of dimensionality. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1.** The utility of the versatile method We acknowledge that the utility of learning from privatized data for versatile DP kernel ERM may be inferior compared to a DP kernel ERM algorithm dedicated to a specific task. For instance, when the RKHS of a given kernel has a finite dimension $d$, kernel regression via privately released data (from our method) has an excess empirical risk of $O(\sqrt{d}n^{-\frac{1}{2}})$. In contrast, the DP algorithm specifically designed for kernel regression can achieve an excess empirical risk of $O(d^2n^{-2})$ (Chaudhuri et al., 2011), which is faster when $d<n$. However, we point out that, in practice, it is common not to have a specific learning method in mind beforehand, and multiple methods are often experimented with. In such scenarios, our proposed approach can guarantee privacy for various learning problems, unlike DP algorithms tailored for a specific task. **W2.** Privacy budget allocation We agree that optimizing privacy budget allocations can enhance the effectiveness of private learning. Figure 1 in the attached PDF implies that the 50-50 allocation may not always be the best. A possible heuristic rule is to allocate the budget depending on the strength of the clustered structures in the data. For instance, when the number of landmarks $m$ is small relative to the sample size $n$, it is advisable to allocate a larger portion of the privacy budget to the linear ERM rather than the DP $K$-means. This is because a smaller $m$ typically results in larger cluster sizes. Since DP $K$-means algorithms acquire private centroids by averaging the members of each cluster privately, larger clusters tend to lead to more accurate centroids given fixed privacy budgets. Therefore, we can afford to allocate more privacy resources to linear ERM. A deeper exploration of the connection between the cluster structure and the quality of the private landmarks is suggested as future work. 
**W3.** Contributions While it is true that our algorithms utilize existing private learning schemes, we emphasize two novel contributions and one practically critical point: First, while the Nyström-based method is a known approach, applying it under privacy constraints is a novel and significant contribution. Existing scalable private kernel learning algorithms typically combine random Fourier features with private linear ERM algorithms, which was inadequate for private general kernel learning. We found that integrating the Nyström-based scheme with private linear ERM is effective, marking the first application of the Nyström-based scheme in the context of differential privacy. Second, implementing an effective Nyström-based scheme under privacy constraints is another key contribution. Various Nyström-based schemes have differing quality under privacy constraints. As shown in Figure 1(b) in the paper, the solid lines show the accuracy of private KME estimation using Nyström-based schemes implemented by various landmark selection methods. The standard subsampling-based Nyström method (gray line) performs poorly under privacy constraints. Our discovery that the DP $K$-means landmark selection is effective underlines the importance of investigating clustering-based Nyström methods under privacy constraints. Finally, our composition framework is a practical strength, enabling adaptive design of private learning algorithms in various settings. Existing private linear ERM algorithms are tailored to specific scenarios, such as convex or non-convex loss, smooth or non-smooth loss, differentiable or non-differentiable regularizer, etc. Since the composition framework only requires the subroutines to be differentially private, it allows one to privately learn various models by substituting private linear ERM algorithms for the corresponding setting. **Q1.** Bi-level optimization. 
It is possible to tune $Z$ through optimization, and similar approaches have been explored in non-private settings. However, there are important issues to deal with. The optimization objective is neither convex nor Lipschitz, and involves many variables (i.e., landmarks). Note that most literature on private optimization assumes a convex or at least Lipschitz objective. Nevertheless, we believe that exploring private landmark selection through the suggested optimization approach could be an interesting direction for future research. **Q2.** The scale factor R The scale factor $R$ is defined in line 5 of Algorithm 1, and this definition is consistent throughout all subsequent discussions. It is a parameter that depends on the kernel function $k$ and the data space $\mathcal{X}$, and is independent of the individual data. **Q3.** The effect of $Q$ In what follows we explain why the landmark points are sampled from $Q$ rather than using centroids or other possible representatives of the underlying dataset. Naturally, it would be preferable to choose landmark points that represent the underlying dataset accurately rather than using random samples. However, when $m$ is too large, the accuracy of the private centroid estimates degrades since the size of each cluster shrinks, which makes centroid estimates (i.e., sample averages) vulnerable to noise for privacy guarantees. Such inaccurate estimates would result in ineffective landmark points. Therefore, we selected $K(<m)$ representatives of the underlying dataset by using $K$ private centroids $z_1,\ldots,z_K$, and chose the remaining $m-K$ landmark points from some distribution $Q$. Thus, $Q$ is intended to enhance the quality of the landmark points by adopting information from the DP $K$-means. `$Q$ can be arbitrarily chosen' means that the choice of $Q$ does not affect the privacy of the algorithm as long as $Q$ does not directly reference the data. 
Although we used a truncated normal centered at the private cluster centroids, we observed that the differences in results from a different choice of $Q$ (e.g., uniform) are insignificant. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response to my questions and the explanation of the challenges of applying the Nystrom method under differential privacy. I would like to keep my current rating of 5.
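The landmark construction discussed in this thread ($K$ centroids plus $m-K$ draws from an auxiliary distribution $Q$, then a Nyström feature map used for KME estimation) can be sketched non-privately. Everything here is a toy stand-in: the true blob centers play the role of the DP $K$-means centroids, and $Q$ is an untruncated normal around them.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K_cent, m = 400, 4, 40
# clustered data: 4 Gaussian blobs in 2D
centers = rng.uniform(-4, 4, size=(K_cent, 2))
X = np.vstack([c + 0.5 * rng.normal(size=(n // K_cent, 2)) for c in centers])

def rbf(A, B, gamma=0.5):
    # Gaussian kernel matrix k(a,b) = exp(-gamma * ||a-b||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# landmarks: K "centroids" plus m-K draws from Q = normal around them
extra = (centers[rng.integers(K_cent, size=m - K_cent)]
         + 0.7 * rng.normal(size=(m - K_cent, 2)))
landmarks = np.vstack([centers, extra])

# Nystrom feature map Phi with Phi Phi^T ~ K(X, X)
W = rbf(landmarks, landmarks)
evals, evecs = np.linalg.eigh(W)
evals = np.clip(evals, 1e-8, None)  # guard near-duplicate landmarks
Phi = rbf(X, landmarks) @ evecs @ np.diag(evals ** -0.5) @ evecs.T

# squared RKHS norm of the empirical kernel mean embedding, exact vs Nystrom
exact = rbf(X, X).mean()
approx = (Phi.mean(axis=0) ** 2).sum()
print(f"exact ||mu||^2: {exact:.4f}, Nystrom: {approx:.4f}")
```

Because the landmarks track the cluster structure of the data, the $m$-dimensional embedding recovers the KME statistic closely, which mirrors the thread's point that representative landmarks (not arbitrary random ones) are what make the Nyström scheme effective.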
null
null
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their constructive feedback. We have addressed all comments to the best of our ability. Detailed point-by-point responses to the reviewers' comments are provided below. Additionally, please see the attached one-page PDF for further experimental results. Pdf: /pdf/36f93e896206ffd0289d4aa7b038dc04b4caf91d.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Latent Feature Mining with Large Language Models
Reject
Summary: This paper proposes a framework to augment latent features from observed features, with the help of LLM. They frame the problem as a text-to-text reasoning problem. The method can be adapted to different domains easily. The method is also validated with a real world dataset. Strengths: Overall, the presentation and logic flow are smooth and clear. The methodology is also reasonable to me. And their experiments also validate the effectiveness of their method. Weaknesses: The key concern for me is that when using the LLM for inference and text generation, I worry about the social bias and fairness of the problem. Some research has shown that LLM is still biased in some sense, can the author conduct some evaluation on whether the latent feature is biased towards some sensitive attributes like race, gender, etc? The other thing is that I wonder how much human labor effort and expert labor effort will be needed to have the latent features. Typo in line 203 Technical Quality: 3 Clarity: 3 Questions for Authors: In line 166, how do you determine the number l ? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thank you for your comments and feedback. Here are our responses to your questions:** --- - **Question 1: Suggestion on adding experiments about social bias and fairness of using LLMs for inference.** - **Response:** We appreciate your concern about the bias and fairness of using LLMs for inference. Following your suggestion, we conducted **additional experiments to validate that the LLMs' inherent bias is not carried into the inference process.** - ***Experiment setting***: Determine if the reasoning process within generated texts exhibits biases related to socio-economic status or risk assessment, specifically racial biases. - ***Experiment results:*** - For socio-economic status: We implemented a pretrained keyword extraction model, YAKE [1], to search for racial terms in the reasoning steps of the text, with results indicating that such keywords were not found, suggesting **no explicit racial bias in this context**. - For risk level assessment: we closely examined the race distribution in the ground-truth data versus the distribution in the predictions made by the model. The analysis revealed that the race distributions between the ground-truth and the predicted outcomes are similar. This similarity suggests that **the model does not introduce additional racial biases in its predictions** and reflects the distributions present in the input data accurately. --- - **Question 2: How much human labor and expert effort will be needed to obtain the latent features?** - **Response:** To obtain latent features, there are two stages that need human expert input. - **The first stage involves developing foundational guidelines** (i.e., the standard solutions to guide LLMs). 
We propose a crafting-rationales strategy to enhance the efficiency of this step, which simplifies the creation of baseline rationales, ensuring that the framework for latent feature extraction is both robust and effectively grounded (*please see page 6, under "In the second sub-step...*). This stage only requires expert knowledge and **minimal labor.** Moreover, the expert input required is not labor-intensive but rather experience-based. For instance, we successfully applied the same framework to a different domain (healthcare) in a short timeframe, indicating that the process of adapting the trained LLMs to new domains is efficient and not overly demanding in terms of labor. - **The second stage requiring human involvement is the validation phase**, which occurs as step 2 in our process. Although this stage has been designed to be as streamlined as possible, it still requires human oversight to ensure the accuracy and relevance of the features being extracted. However, this process has been optimized to **minimize labor**, focusing on quality control without demanding extensive time from our experts. We note that **our framework requires significantly less labor than what is traditionally required in human annotation approaches** for latent feature mining, yet this efficiency does not compromise the quality of the outcomes. --- - **Question 3: How is the first intermediate predicate inferred?** - **Response:** From our interpretation of your question, we assume that you are asking how the first intermediate predicate (P_1) can be inferred. By leveraging domain expertise, we formulate initial hypotheses or predicates about potential relationships between features. This relates to our response to your Question 2 above, where the first stage of our framework requires human expert opinion to develop foundational guidelines (i.e., the standard solutions to guide LLMs). --- **We appreciate your comments and questions, and hope this response addresses your questions. 
We look forward to any further feedback you may have.** ---- [1] Campos, R., Mangaravite, V., Pasquali, A., Jorge, A., Nunes, C. and Jatowt, A. (2020). YAKE! Keyword Extraction from Single Documents using Multiple Local Features. In Information Sciences Journal. Elsevier, Vol 509, pp 257-289. --- Rebuttal Comment 1.1: Comment: Thank you for the thoughtful rebuttal. I will be maintaining my current score.
Summary: The authors propose a unique form of LLM data augmentation that attempts to generate informative latent variables to improve downstream tasks. They do this by transforming the latent feature mining task into a text-to-text propositional reasoning task. Validation is performed with a case study in the criminal justice system; latent features align well with ground-truth labels and significantly enhance downstream classifier performance. Strengths: - Clear, well written paper with descriptive diagrams - Using LLMs to infer latent space in this way seems to be a novel idea Weaknesses: - Typo on line 203: "whic serve" - Potential for LLM biases in the latent variable finding. E.g. Marijuana usage does not necessarily require Substance Abuse Treatment. - Lack of evaluation on multiple datasets/domains and no publicly released code - Lack of ablations exploring generalizability with x\% features removed Technical Quality: 2 Clarity: 2 Questions for Authors: See weaknesses - How does this compare with other forms of data augmentation? - How does this compare with traditional methods of obtaining latent space representations -> VAE, etc? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thank you for your valuable feedback and comments; here are our responses to your questions:** - **Question 1: Concern about potential biases from LLMs in latent variable finding.** - **Response:** Thank you for highlighting this concern. **Our framework aims to minimize biases by leveraging domain-specific data and expert input during the fine-tuning processes.** We carefully design rules based on domain knowledge and expert input. For example, the client shown in Figure 1 primarily uses marijuana but has a property offense as the admitting offense. Our community partner noted that marijuana usage often correlates with other substance use (though not recorded in the admitting offense) and that “the prevention of heavy marijuana use could potentially reduce property crime in the future” [1]. Therefore, LLMs, following expert guidance, add substance abuse treatment to the potential requirement list. As addressed in another comment by Reviewer U9i4, we conducted a thorough sanity check, showing that **LLMs trained in our framework do not amplify bias but adhere to domain expert principles** (the “standard solutions” provided for training). While bias may be inherent in input data, **our framework ensures transparency in inference and prediction, aiding result interpretation and bias correction through human-in-the-loop processes**, which allows domain experts to adjust and refine the model’s reasoning pathways. Consequently, inferred latent features align with nuanced real-world outcomes rather than broad statistical correlations. - **Question 2: Can you add experiments on other domains/datasets to prove generalizability?** - **Response:** We have conducted additional experiments in the healthcare domain with the MIMIC-IV dataset to prove the generalizability of our framework.
Here are the experiment details and results: - **Dataset:** The MIMIC dataset is a comprehensive dataset containing detailed de-identified patient clinical data and is widely used for various prediction tasks in the machine learning literature. - **Task Description:** The discharge location prediction task uses patient data to forecast the most likely discharge destination, aiding hospital management in preparing for discharges. We leverage our framework to introduce a "social support" feature, which captures the healthcare and community support available to the patient. This involves repeating the four-step process of our framework: - ***Step 1.*** Create rationale: we leverage domain expertise in hospital inpatient management and patient flow to help us craft rationales to infer social support. - ***Step 2.*** Enlarge synthetic data for LLM training: similar to our approach for the outcome prediction task using the criminal justice data (task 2 mentioned in the main paper), we use GPT-4 to generate 4500 data points to fine-tune GPT-4o-mini. - ***Step 3 & 4.*** The remaining two steps are the same as those used for the outcome prediction task (task 2 mentioned in the main paper). - **Experiment Setting:** We lack the ground-truth label for the latent variable "social support," making our experiments akin to Task 2 in the main paper. After generating the latent features, we train four machine learning classifiers: Logistic Regression (LR), Multilayer Perceptron (MLP), Gradient Boosting Trees (GBT), and Random Forest (RF). Training is conducted with and without latent features for comparison. Each experiment is run five times with different random seeds, and the results are averaged to ensure reliability. - **Experiment Results:** The table below demonstrates the experiment results, showing an average improvement of approximately 8.64% in accuracy and 8.64% in F1 score when latent features are added to the models.
This is similar to the percentage increase reported in Table 4A in the main paper.

| Model | Accuracy (std.) | F1_score (std.) |
|:----------:|:-----------------:|:-----------------:|
| LR | 65.22% (0.01) | 65.46% (0.01) |
| MLP | 63.19% (0.02) | 63.19% (0.02) |
| GBT | 64.84% (0.01) | 65.09% (0.01) |
| RF | 65.11% (0.01) | 65.44% (0.01) |
| LR w/ LF | **71.22% (0.01)** | **71.26% (0.01)** |
| MLP w/ LF | **74.40% (0.01)** | **74.50% (0.01)** |
| GBT w/ LF | **75.56% (0.02)** | **75.38% (0.02)** |
| RF w/ LF | **75.31% (0.01)** | **75.22% (0.01)** |

The results provide **further strong evidence that our framework improves downstream prediction power through the addition of latent features**. Connecting to Lemma 1, our experiment results show that the added features are informative – likely because the human experts (case managers or physicians) are making decisions based on more than the explicitly recorded features (X) in the dataset. - **Question 3: How does this compare with other forms of data augmentation?** - **Response:** Our framework leverages LLMs to augment observed features in datasets with latent features. This approach differs from traditional data augmentation methods like VAE, as we augment the dimensions of X rather than the sample sizes. A detailed comparison: VAEs aim to represent the original feature space in a lower-dimensional space, whereas our goal is to add mined features to enhance prediction power for downstream tasks. VAEs use probabilistic approaches to describe data distribution with latent variables, but their mappings can be difficult to interpret, while our framework offers better interpretability. Section 2 contains a more detailed comparison with other methods. ---- **We sincerely thank you for your feedback and comments, and hope this response addresses your questions. We are looking forward to further discussion. If our responses have addressed your concerns, we kindly request a reconsideration of the rating score.
Thank you again for your valuable input!** ---- [1] Green KM, et al. Does heavy adolescent marijuana use lead to criminal involvement in adulthood? _Drug Alcohol Depend_. doi:10.1016/j.drugalcdep.2010.05.018 --- Rebuttal 2: Comment: **Thank you for your comments and valuable suggestions; here are our responses to your suggestions:** --- - **Suggestion 1: Add ablations exploring generalizability with x% features removed.** - **Response:** Thank you for your valuable suggestion regarding the inclusion of ablation studies. We recognize this as an important aspect of framework evaluation and have thus conducted additional experiments to gauge the robustness and dependence of our model on specific features. We systematically removed different proportions of features and reran experiments on the risk level prediction task. - **Experiment 1: Removing Features Mentioned in the Provided Guidelines** As discussed in the paper, we use domain knowledge and expert input to provide guiding principles for the LLMs' inference process. For this experiment, we removed two features explicitly mentioned in the guiding principles—age and marriage status—and observed the following impacts on model performance:

| Model | ROC_AUC score |
|:------------------:|:-------------:|
| MLP | 50% |
| GBT | 43% |
| GPT3.5 Baseline | 56% |
| GPT3.5 Full-feature | 75% |

The results demonstrate that excluding features specified in the guiding principles reduces the final accuracy, but the model (GPT3.5 Baseline) still outperforms traditional machine learning models such as MLP and Gradient Boosting Trees (GBT).
- **Experiment 2: Removing Features Not Included in the Provided Guidelines** In this second experiment, we removed the feature “referral source”, which was not explicitly recommended to be used in the guiding principles:

| Model | ROC_AUC score |
|:------------------:|:-------------:|
| MLP | 54% |
| GBT | 58% |
| GPT3.5 Baseline | 71% |
| GPT3.5 Full-feature | 75% |

These results indicate that the exclusion of features not included in the guiding principles only slightly reduces final accuracy, and the model (GPT3.5 Baseline) continues to outperform the machine learning baselines (MLP and GBT). --- - **Suggestion 2:** Release code for more comprehensive evaluation - **Response:** We planned to release the code after the review. Following your suggestion, we are providing access to the code for the MIMIC experiments as a preview. Following the rebuttal instructions, we have submitted an anonymous link to the code to the AC. This link will become available for preview upon approval by the AC. ---- **Thank you again for your valuable input! We are looking forward to further discussion.** --- Rebuttal Comment 2.1: Comment: The additional evaluations are more encouraging, especially the ablations with Removing Features Mentioned in the Provided Guidelines as well as the Removing Features Not Included in the Provided Guidelines. However, I still think it is quite difficult to validate the quality of the model even with the MIMIC dataset, as the interpretability results are still quite difficult to evaluate in its current state. Even if human-in-the-loop evaluations are used on the generated examples, there is no guarantee of the trend holding for different datasets, especially those out of distribution (e.g. LLM's reasoning on highly specific domains such as chemistry remains quite nonsensical). I am not entirely convinced by the following responses to Reviewer U9i4's first or second points.
From personal experience, "logical reasoning leads to correct predictions, and incorrect reasoning does not produce correct results" seems to slightly contradict the results found in [1], where COT explanations can be plausible yet misleading. Also, any LLM that "extrapolates information not explicitly present in the original data" could also suffer from hallucinations, which is an open problem. I'd like to thank the authors for putting in the time to answer my responses, but I still think it's difficult for me personally to accept the paper. [1] Turpin, M., Michael, J., Perez, E., & Bowman, S. (2024). Language models don't always say what they think: unfaithful explanations in chain-of-thought prompting. Advances in Neural Information Processing Systems, 36. --- Reply to Comment 2.1.1: Comment: **Thank you for taking the time to review our work and for replying to our responses.** We appreciate the opportunity to clarify and expand on our findings. Here are our responses to your concerns **point-by-point**: --- ***Concern on generalizability:*** > "It is quite difficult to validate the quality of the model even with > the MIMIC dataset. Even if human-in-the-loop evaluations are used on > the generated examples, there is no guarantee of the trend holding for > different datasets." As discussed in Sections 1 and 4 of our paper, our framework is designed to **enhance machine learning models by externalizing expert knowledge to generate new features**. Our experiments with the MIMIC dataset demonstrate that, with sufficient expert input, **our framework generates strong features that significantly enhance predictive accuracy**.
The success observed in the two tested domains indicates that our framework is **highly flexible and can be effectively transferred to other domains**. It is important to note that adapting it to new areas requires careful integration of expert knowledge as a crucial first step to ensure that the framework maintains its robustness and accuracy across different applications. --- ***Concern on the robustness of CoT:*** > "From personal experience, 'logical reasoning leads to correct > predictions, and incorrect reasoning does not produce correct results' > seems to slightly contradict the results found in [Turpin et al., 2024], where CoT > explanations can be plausible yet misleading." **Thank you for highlighting this concern.** We carefully reviewed the paper you cited. Its experiments are conducted in **zero-shot and few-shot settings**. We agree that CoT can plausibly generate misleading or biased output in few-shot settings. It’s important to note that our framework trains LLMs to better align with human knowledge. Fine-tuning, an important component of our approach, significantly increases the quality of the generated CoT and enhances the reasoning capabilities of LLMs. As indicated in Section 6 (line 370) of our paper, our ablation study on the **fine-tuning process shows that fine-tuned LLMs substantially outperform those in zero-shot and few-shot scenarios**. Additionally, **CoT has been validated as an effective technique across various domains**. For instance, the CoT strategy has been shown to significantly improve LLMs' performance in document understanding and citation generation [1]. It also helps VLMs mimic multi-hop reasoning in answering SCIENCEQA questions [2]. Additionally, LLMs fine-tuned with CoT exhibit marked improvements in reasoning ability across different datasets [3].
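To make the fine-tuning input concrete, the sketch below assembles one chat-style JSONL training record that pairs a written-out profile with an expert-guided rationale and answer. The field names, prompt wording, and example values are hypothetical illustrations, not excerpts from our actual training data:

```python
import json

def make_cot_record(profile, rationale, label):
    """Assemble one supervised fine-tuning example: profile in, rationale + answer out.
    All names and values here are illustrative placeholders."""
    prompt = (
        "Infer the latent feature from the client profile, "
        "reasoning step by step before giving a final answer.\n"
        + "\n".join(f"{k}: {v}" for k, v in profile.items())
    )
    return json.dumps({
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": f"{rationale}\nAnswer: {label}"},
        ]
    })

# One hypothetical record, echoing the Figure 1 example discussed above.
line = make_cot_record(
    {"age": 34, "admitting_offense": "property", "primary_substance": "marijuana"},
    "Marijuana use often co-occurs with other substance use, per expert guidance.",
    "substance abuse treatment: yes",
)
```

Collecting many such lines yields the fine-tuning file; the rationale text is exactly where the expert-crafted guiding principles enter the training signal.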
--- ***Concern on hallucination:*** > "Any LLM that extrapolates information not explicitly present in the > original data could also suffer from hallucinations, which is an open > problem." **We address hallucinations through fine-tuning and post-generation validation, aiming to filter out such inaccuracies**. Moreover, our error analysis of the generated reasoning text revealed that the fine-tuned LLMs consistently used accurate information from the profile without fabricating details (for more details, please refer to our response to Reviewer U9i4). **As you mentioned, hallucination remains an open problem in the field; a full treatment falls beyond the immediate scope of our work**. We will leave this issue as a direction for future research. Additionally, Figure 3 in the paper presents the fine-tuned LLMs' inference results on the risk-level prediction task, where **our framework was applied to infer features with known ground truth**. The results show that the LLMs are able to generate high-quality features even when extrapolating to unseen information, and outperform traditional machine learning approaches in accurately inferring latent features. This demonstrates the accuracy and effectiveness of our framework, giving us confidence in **its ability to generate high-quality features**. --- **Thanks for engaging in the discussion period. We look forward to any further feedback. If this response addresses your concerns, we kindly request a reconsideration of our rating. Thank you again for your valuable input!** --- ***References:*** [1] Ji, B., Liu, H., Du, M., & Ng, S.-K. (2024). Chain-of-Thought Improves Text Generation with Citations in Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 18345-18353. https://doi.org/10.1609/aaai.v38i16.29794 [2] Lu, P., Mishra, S., Xia, T., Qiu, L., Chang, K. W., Zhu, S. C., ... & Kalyan, A. (2022).
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering. Advances in Neural Information Processing Systems, 35, 2507-2521. [3] Ho, N., Schmid, L., & Yun, S. Y. (2023, July). Large Language Models Are Reasoning Teachers. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 14852-14882.
Summary: The paper presents a framework that uses LLMs to improve predictive modeling by augmenting observed features with inferred latent features. This approach transforms the latent feature mining task into a text-to-text propositional reasoning task, enabling LLMs to infer unobserved yet crucial factors from available data. The framework is tested through a case study in the criminal justice system, demonstrating improved accuracy in scenarios where collected features are weakly correlated with outcomes. Strengths: 1. It addresses the challenge of limited data availability by leveraging LLMs to infer latent features, to improve predictive modeling. 2. The approach of transforming latent feature mining into a text-to-text propositional reasoning task is interesting. 3. The validation on criminal justice data, shows potential for broader applications. The method's generalizability across different domains with minimal customization is a significant advantage, and the reduced need for extensive human-annotated training data makes it practical and scalable. Weaknesses: 1. The paper does not adequately address how to measure the impact of errors introduced by the LLM-based solution on predicted outcomes, nor does it provide uncertainty estimates. This process surely introduces errors, as with any ML-based solution. How do we measure its effect on predicted outcomes, including uncertainty estimates? I suspect more labeled data would be needed to assess this properly (see Egami et al. @ NeurIPS 2023). 2. It is unclear whether the approach should be viewed as a form of dimensionality reduction (based on existing features) or if it extrapolates information not present in the original data. This ambiguity raises concerns about potential bias amplification. For instance, Figure 1 shows deductions made by the model that are not clearly supported by the evidence, suggesting that the method might be amplifying existing biases rather than mitigating them. 3. 
Connecting to the previous point, the method's ability to learn latent information that is causally predictive of the outcome, as opposed to relying on spurious correlations, remains uncertain. Conducting an out-of-distribution test, where the characteristics of individuals differ from the training data, would be crucial in evaluating the model's generalizability and causal inference capabilities. 4. The rationale behind not using all available data directly in the LLM for prediction is not well-justified. Directly prompting the LLM with the full data might provide more accurate predictions without the need for dimensionality reduction. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How do you measure the impact of errors introduced by the LLM-based solution on predicted outcomes, including uncertainty estimates? 2. Should the proposed method be viewed as a form of dimensionality reduction, or does it extrapolate information not present in the original data? How do you ensure that this process does not amplify existing biases? 3. How do you ensure that the method learns latent information that is causally predictive of the outcome and does not simply rely on spurious correlations? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have acknowledged limitations of their work, particularly in addressing the ethical concerns associated with data collection and the need for privacy. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thank you for your valuable feedback!** - **Question 1: How to measure the impact of errors? How to ensure the approach doesn't amplify potential errors?** - **Response:** As discussed in Section 4, we incorporate human-in-the-loop interventions to identify and remove erroneous synthetic reasoning steps from the training data. Additionally, we utilize automatic evaluation metrics to detect errors within the reasoning process and exclude them from the training dataset to maintain model integrity. Furthermore, we conducted an additional error analysis for the Risk Level Prediction (Task 1 in the main paper) to evaluate the impact of errors. We recruited human volunteers to examine all 1168 instances used in Step 2 (generating synthetic rationales) of the framework. Our analysis found that logical reasoning leads to correct predictions, and incorrect reasoning does not produce correct results. This supports the effectiveness of our framework in avoiding error amplification. - **Question 2: Clarification on the difference with dimension reduction.** - **Response:** Our method extrapolates information not explicitly present in the original data, unlike dimension reduction techniques like VAE or PCA, which represent the original feature space in lower dimensions while retaining as much information as possible. Instead, our framework enriches the dataset by adding mined features to improve prediction power for downstream tasks, mimicking human experts' reasoning by considering features holistically and inferring additional socio-economic information not recorded in the data. Figure 1 in our paper illustrates this step-by-step extrapolation process using LLMs.
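To make the contrast concrete, here is a minimal, self-contained sketch. The toy rows and the expert-style rule are hypothetical illustrations; in our framework the latent column would come from the fine-tuned LLM, not a hand-written rule:

```python
# Toy contrast between dimension reduction and feature augmentation.
X = [
    [34, 1, 0],  # hypothetical columns, e.g. age, offense code, prior incidents
    [27, 0, 2],
    [45, 1, 1],
]

def reduce_dims(rows, keep=2):
    """Reduction (a crude stand-in for PCA/VAE): map X to fewer columns."""
    return [row[:keep] for row in rows]

def augment_with_latent(rows, infer):
    """Augmentation: append one inferred latent feature to every row."""
    return [row + [infer(row)] for row in rows]

# Hypothetical expert-style rule standing in for the LLM's inference step.
needs_treatment = lambda row: 1 if row[2] > 0 else 0

reduced = reduce_dims(X)                             # 3 columns -> 2
augmented = augment_with_latent(X, needs_treatment)  # 3 columns -> 4
```

The point is only the shape of the data: reduction shrinks the column count of X, whereas our augmentation grows it by one mined feature that downstream classifiers can then exploit.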
- **Question 3: Suggestion on adding experiments on other domains/datasets to prove generalizability.** - **Response:** We have conducted additional experiments in the healthcare domain with the MIMIC-IV dataset to prove the generalizability of our framework. Here are the experiment details and results: - **Dataset:** The MIMIC dataset is a comprehensive dataset containing detailed de-identified patient clinical data and is widely used for various prediction tasks in the machine learning literature. - **Task Description:** The discharge location prediction task uses patient data to forecast the most likely discharge destination, aiding hospital management in preparing for discharges. Our latent feature framework creates features that enhance machine learning models for this task. Notably, we introduce a "social support" feature, which captures the healthcare, familial, and community support available to the patient. This involves repeating the four-step process of our framework: - ***Step 1.*** Create rationale: we leverage domain expertise in hospital inpatient management and patient flow to help us craft rationales to infer social support. - ***Step 2.*** Enlarge synthetic data for LLM training: similar to our approach for the outcome prediction task using the criminal justice data (task 2 mentioned in the main paper), we prompt GPT-4 to generate 4500 data points to fine-tune GPT-4o-mini. - ***Step 3 & 4.*** The remaining two steps are the same as those used for the outcome prediction task. - **Experiment Setting:** Because we lack the ground-truth label for the latent variable "social support," our experiments are akin to Task 2 in the main paper. After generating the latent features, we train four machine learning classifiers: Logistic Regression (LR), Multilayer Perceptron (MLP), Gradient Boosting Trees (GBT), and Random Forest (RF). Training is conducted with and without latent features for comparison.
Each experiment is run five times with different random seeds, and the results are averaged to ensure reliability. - **Experiment Results:** The table below demonstrates the experiment results, showing an average improvement of approximately 8.64% in accuracy and 8.64% in F1 score when latent features are added to the models. This is similar to the percentage increase reported in Table 4A in the main paper.

| Model | Accuracy (std.) | F1_score (std.) |
|:----------:|:-----------------:|:-----------------:|
| LR | 65.22% (0.01) | 65.46% (0.01) |
| MLP | 63.19% (0.02) | 63.19% (0.02) |
| GBT | 64.84% (0.01) | 65.09% (0.01) |
| RF | 65.11% (0.01) | 65.44% (0.01) |
| LR w/ LF | **71.22% (0.01)** | **71.26% (0.01)** |
| MLP w/ LF | **74.40% (0.01)** | **74.50% (0.01)** |
| GBT w/ LF | **75.56% (0.02)** | **75.38% (0.02)** |
| RF w/ LF | **75.31% (0.01)** | **75.22% (0.01)** |

This experiment on a different dataset from a different domain shows the effectiveness and generalizability of our framework. Connecting to Lemma 1, our experiment results show that the added features are informative – likely because the human experts (case managers or physicians) are making decisions based on more than the explicitly recorded features (X) in the dataset. - **Question 4: How to ensure that the method learns latent information that is causally predictive of the outcome?** - **Response:** We acknowledge that our current framework does not explicitly perform causal inference. Instead, relationships between features and outcomes are identified through domain knowledge and expert input, leveraging expert understanding to guide the identification of meaningful latent features. We agree on the importance of causally relevant latent features and recognize this as a crucial direction for future research. However, incorporating causal inference into our framework is beyond the scope of this paper and would require a dedicated study to thoroughly examine and address this issue.
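For concreteness, the evaluation protocol in the Experiment Setting above (70/30 split, five random seeds, scores averaged, with and without the latent-feature column) can be sketched as follows. The toy data, the trivial thresholding "classifier", and the oracle-like latent column are hypothetical stand-ins for the real features and for LR/MLP/GBT/RF; only the comparison loop mirrors our setup:

```python
import random
from statistics import mean

def train_and_eval(rows, labels, seed):
    """One run: shuffle with the given seed, 70/30 split, score on the test set.
    The 'classifier' simply thresholds the last feature -- a toy stand-in."""
    rng = random.Random(seed)
    idx = list(range(len(rows)))
    rng.shuffle(idx)
    cut = int(0.7 * len(idx))  # 70/30 train/test split
    test = idx[cut:]
    correct = sum(1 for i in test if (rows[i][-1] > 0.5) == labels[i])
    return correct / len(test)

def averaged_score(rows, labels, seeds=(0, 1, 2, 3, 4)):
    """Repeat the experiment once per seed and average, as in our protocol."""
    return mean(train_and_eval(rows, labels, s) for s in seeds)

# Hypothetical base features X and labels y; X_lf appends an oracle-like
# latent column (1 when the latent condition holds) purely for demonstration.
X = [[0.2], [0.9], [0.4], [0.8], [0.1], [0.7], [0.3], [0.6], [0.5], [0.95]]
y = [False, True, False, True, False, True, False, True, True, True]
X_lf = [row + [1 if yy else 0] for row, yy in zip(X, y)]

base = averaged_score(X, y)
with_lf = averaged_score(X_lf, y)
```

By construction `with_lf` reaches 1.0 here, since the demo latent column encodes exactly the missing signal; the real experiments replace the toy data and threshold rule with our datasets and classifiers, but the seed-averaged with/without-LF comparison is the same.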
---- **We sincerely thank you for your feedback and comments, and hope this response addresses your questions. We look forward to further feedback. If our responses have addressed your concerns, we kindly request a reconsideration of the rating score. Thank you again for your valuable input!** --- Rebuttal Comment 1.1: Comment: thank you for thoughtfully addressing my questions, and especially for adding an experiment. i still think that a more careful design w.r.t. causality and potential errors would make this paper substantially more useful.
Summary: This paper used large language models to infer latent variables that are important for downstream prediction tasks to augment the existing models. In particular, the author demonstrated the use of the proposal on a criminal justice system use case, in which the LLM-mined-latent features significantly boost the prediction performance. Overall the paper presents an interesting question and how LLM could help mining for latent features, but I have several questions regarding 1. the generalizability of the proposed method; 2. Appropriate combinations/baseline methods; and 3. The potential ethical implications of this method. I will detail these points in the strength/weakness sections below. Strengths: 1. On the outcome prediction results, using the latent feature seems to have boosted the performance by 7-10%, a big margin. 2. The proposed framework brings a level of formalism to the current crowded LLM for (social) science applications/work, including the text-to-text proposition work. Weaknesses: 1. Despite the general initial framing in Section 3, it was not very clear how generalizable results from Sections 4-6 are — this includes not only the COT and the prompts used, but more critically the selection of what kind of latent features we are including. 2. The current work does not seem to go into depth about what kind of latent features are LLM particularly good at constructing and which ones are particularly “bad” (eg subject to the most bias and systematic over-or-under-prediction). I think this is particularly relevant for social science applications where many of the categories and features are more of a “construct” and often qualitative in nature. 3. Have the authors compared the results by using text embeddings of the descriptions as an input feature? 
It’s interesting that the fine-tuning strategy is necessary for good performance, which seems to suggest that learning the intermediate classification rule is important since direct manipulation of natural language yields less impressive results. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. In addition to the questions listed in the weakness section, I would encourage the authors to engage much more critically with the limitations of these approaches, esp. when high-risk predictions in CJS are very high-stake. 2. I think this paper would also be much stronger if the proposed framework of mining for latent features (which requires quite extensive human rationale collection) works for another prediction problem (either different data and/or different tasks). Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Mentioned above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thank you for your comments and feedback. Here are our responses to your questions:** --- **Question 1: Suggestion on adding experiments on other domains/datasets to prove generalizability.** **Response:** We have conducted additional experiments in the healthcare domain with the MIMIC-IV dataset to prove the generalizability of our framework. Here are the experiment details and results: - **Dataset:** The MIMIC (Medical Information Mart for Intensive Care) dataset [1] is a comprehensive dataset containing detailed de-identified patient clinical data and is widely used for various prediction tasks in the machine learning literature. - **Task Description:** The discharge location prediction task uses patient data to predict the most likely discharge destination, aiding hospital management in preparing for discharges. We apply our latent feature framework to create features that enhance machine learning models for this task. Specifically, we introduce a "social support" feature, capturing healthcare, familial, and community support available to the patient. We repeat the four-step process of our framework: - ***Step 1.*** Create rationale: we leverage domain expertise in hospital inpatient management and patient flow to help us craft rationales to infer social support. - ***Step 2.*** Enlarge synthetic data for LLM training: similar to our approach for the outcome prediction task using the criminal justice data (task 2 mentioned in the main paper), we use a self-instructing approach to prompt GPT-4 to generate 4500 data points to fine-tune GPT-4o-mini. - ***Step 3 & 4.*** The remaining two steps are the same as those used for the outcome prediction task (task 2 mentioned in the main paper). - **Experiment Setting:** Note that we do not have the ground-truth label for the latent variable "social support," making our experiments similar to Task 2 in the main paper.
After generating the latent features, we train four machine learning classifiers: Logistic Regression (LR), Multilayer Perceptron (MLP), Gradient Boosting Trees (GBT), and Random Forest (RF). We conduct training with and without latent features for comparison. The dataset is split 70/30 for training and testing. Each experiment is run five times using five different random seeds, averaging the results to ensure reliability. - **Experiment Results:** The table below demonstrates the experiment results, showing an average improvement of approximately 8.64% in accuracy and 8.64% in F1 score when latent features are added to the models. This is similar to the percentage increase reported in Table 4A in the main paper.

| Model | Accuracy (std.) | F1_score (std.) |
|:----------:|:-----------------:|:-----------------:|
| LR | 65.22% (0.01) | 65.46% (0.01) |
| MLP | 63.19% (0.02) | 63.19% (0.02) |
| GBT | 64.84% (0.01) | 65.09% (0.01) |
| RF | 65.11% (0.01) | 65.44% (0.01) |
| LR w/ LF | **71.22% (0.01)** | **71.26% (0.01)** |
| MLP w/ LF | **74.40% (0.01)** | **74.50% (0.01)** |
| GBT w/ LF | **75.56% (0.02)** | **75.38% (0.02)** |
| RF w/ LF | **75.31% (0.01)** | **75.22% (0.01)** |

The results provide **further strong evidence that our framework improves downstream prediction power through the addition of latent features**. This experiment on a different dataset from a different domain shows the effectiveness and generalizability of our framework. Connecting to Lemma 1, our experiment results show that the added features are informative – likely because the human experts (case managers or physicians) are making decisions based on more than the explicitly recorded features (X) in the dataset. --- **Question 2: Suggestion on using text embeddings to show the effectiveness of learning** **Response:** - Thank you for this interesting suggestion.
From our interpretation of your comment, we assume that you are asking if we have compared our results with an alternative approach where text embeddings (numerical representations of text) of the descriptions are used as input features directly. We make two clarifications. First, the given input features X from our datasets are either continuous (numeric) or categorical (discrete) numbers. There is no natural text embedding directly available from the data. Thus, we use Chain of Thought (CoT) prompting, a critical component of our framework, to provide the interpretable text input after processing the numeric features X. Hence, there is no direct alternative to compare if you were thinking of replacing the input features with text embeddings. Second, integrating text embeddings after profile writing is an idea worth exploring. We acknowledge that it could be an interesting alternative and warrants further exploration. Meanwhile, we note that replacing natural language with text embeddings as input could obscure the step-by-step logical flow that is essential for the transparency and interpretability of the CoT process. Nevertheless, though we have not specifically compared this approach in our current study, we recognize that it could provide valuable insights. Future work could explore integrating text embeddings alongside our fine-tuning strategy to potentially combine the strengths of both methods. Our conjecture is that the fine-tuning strategy is necessary for achieving good performance. This aligns with your suggestion that the process of learning an intermediate classification rule during fine-tuning is crucial for the model’s success. ---- **Thank you for your insightful comments and questions. We hope our response has been helpful in addressing your concerns. We are looking forward to further discussion.** ---- [1] Johnson, A. E. W., Pollard, T. J., Shen, L., Li-Wei, H. L., Feng, M., Ghassemi, M., ... & Mark, R. G. (2016).
MIMIC-III, a freely accessible critical care database. Scientific Data, 3, 160035. --- Rebuttal 2: Comment: **Thank you for the thoughtful comment! Following your suggestion, we plan to add the following discussions to the main paper.** --- - **Discussion 1: What are the limitations of the framework when high-risk predictions in the CJS are very high-stakes?** - **Response:** - Thank you for this insightful question. Below we provide a more detailed discussion of the limitations of the framework for high-risk predictions in the criminal justice system (CJS). We will incorporate this discussion into the main paper during our revision process. - **Data Quality and Dependency:** The performance and reliability of the framework depend on the quality of both the training data and the input data used during operation. Inconsistencies, errors, or gaps in data can lead to inaccurate predictions, which is critical when these predictions aim to influence decisions about individuals’ freedoms in the CJS. We are cautious of such errors, which is why we closely involve human experts throughout the research process. While acknowledging that bias is sometimes inherent in the input data, we emphasize that our framework provides transparency into how the inference/prediction is made, which can significantly help interpret results and correct for such bias by introducing a human-in-the-loop process. In other words, our framework allows an iterative fine-tuning process, where we can incorporate feedback loops with domain experts to adjust and correct the model’s reasoning pathways. This iterative method ensures that the latent features inferred, such as the need for substance abuse treatment, are aligned with nuanced real-world outcomes rather than broad statistical correlations. - **Impact on Public Trust:** The use of AI in areas impacting fundamental rights can affect public trust in the justice system. 
If the public perceives these tools as opaque or biased, it could undermine confidence in judicial processes, which is critical for the effective functioning of any democratic legal system. LLMs provide interpretability that is useful for gaining public trust and allow an iterative process to refine the prediction results. --- - **Discussion 2: What kinds of latent features is your framework particularly good at constructing?** - **Response:** Thank you for such an insightful question. Our framework excels at constructing latent features from observable and structured data with clear relationships, such as socio-economic status indicators and risk levels. These features benefit from well-defined guidelines and domain knowledge, allowing our framework to infer them with higher accuracy and reliability by mimicking the human reasoning process. The framework can be less effective with subjective latent features or those with subtle relationships, like personal behavioral tendencies. These complexities can introduce bias and lead to overfitting and systematic errors. To mitigate these issues, we use domain expertise and a human-in-the-loop approach to refine latent feature construction. Despite these measures, some biases may persist, and addressing them remains a focus for future research. We will continue to explore improvements. ---- **Thank you again for your valuable input! We are looking forward to further discussion.**
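The with/without latent-feature comparison described in this thread can be sketched as follows. This is a minimal illustration on synthetic data; the feature dimensions, latent-factor strength, and classifier settings are our own assumptions, not the authors' setup.

```python
# Minimal sketch (synthetic data, not the authors' dataset): a classifier trained
# on observed features X alone versus X augmented with an informative latent
# feature, mirroring the "w/ LF" rows in the rebuttal's table.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 5))                      # observed features
latent = rng.normal(size=n)                      # hidden factor the expert "sees"
y = ((X[:, 0] + 2.0 * latent) > 0).astype(int)   # labels depend on the latent factor

X_tr, X_te, lf_tr, lf_te, y_tr, y_te = train_test_split(
    X, latent, y, test_size=0.3, random_state=0)

acc_base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
acc_lf = LogisticRegression(max_iter=1000).fit(
    np.column_stack([X_tr, lf_tr]), y_tr).score(
    np.column_stack([X_te, lf_te]), y_te)
```

When the latent factor genuinely drives the expert's decisions, adding it raises test accuracy substantially, which is the qualitative effect the table's "w/ LF" rows report.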
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Theoretical Investigations and Practical Enhancements on Tail Task Risk Minimization in Meta Learning
Accept (poster)
Summary: This work investigates tail risk minimization in meta-learning from theoretical and practical perspectives. Overall, this work is well-written and novel, and it theoretically enriches TR-MAML [1]/DR-MAML [2]. In the realm of large models, meta-learning plays a crucial role due to the pressing concern of distributional robustness across various tasks, particularly in risk-sensitive applications. Here, I express my positive attitude based on this work's completeness, novelty, and workload. [1] Collins, L., Mokhtari, A., & Shakkottai, S. (2020). Task-robust model-agnostic meta-learning. Advances in Neural Information Processing Systems, 33, 18860-18871. [2] Wang, Q., Lv, Y., Xie, Z., & Huang, J. (2024). A simple yet effective strategy to robustify the meta learning paradigm. Advances in Neural Information Processing Systems, 36. Strengths: The novelty of this work lies in two aspects. (1) This work reduces distributionally robust meta-learning to a max-min optimization and performs analysis on tail risk generalization, convergence rate estimation, and the role of quantile estimation; (2) This work enhances DR-MAML through more accurate quantile estimates with theoretical support. Overall, this work clarifies its contribution in Table 3 and completes all claims together with detailed proofs. Extensive experiments further verify the theoretical insights. Weaknesses: No concrete weakness. Technical Quality: 4 Clarity: 4 Questions for Authors: With complete proofs and empirical evaluations, I do not have too many concrete questions except for some discussions. (1) Both group DRO and tail risk minimization handle distribution shift in robust optimization; what are the advantages or disadvantages between them in meta-learning? (2) What difficulties will we encounter if we adopt the developed strategy in the optimization of large models? 
Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank **# Reviewer 8vBx** for these helpful comments. The remainder focuses on answering the questions. --- **1. Advantages and disadvantages between group DRO and tail risk minimization in meta-learning** Thanks for this comment. The group DRO method employs a risk-reweighting algorithm to relax task weights and assign more weight to the gradients of the worst cases. The tail risk minimization principle adopts a two-stage optimization strategy to control the worst fast-adaptation cases at a certain probabilistic level. (1) Group DRO is advantageous when predefined groups are available to guarantee robust performance across these groups; however, meta-training proceeds in a task-episodic manner, which weakens the applicability of group DRO. Tail risk minimization is more flexible, does not require clearly defined groups, and is suitable for risk-sensitive scenarios. (2) In the experiments, the tail risk minimization method consistently outperforms group DRO, demonstrating the advantages of the two-stage optimization strategy in improving robustness. **2. Application to large models** Thanks for this constructive comment. To answer the scalability question for large NNs, e.g., large models, we directly run experiments with large models. **Experimental setup:** CLIP [1] is a recent large vision-language model; hence, we employ "ViT-B/16"-based CLIP as the backbone to enable few-shot learning in the same way as MaPLe (N_CTX: 2, MAX_EPOCH: 30) [2], scaling to large NNs in evaluation. SGD is the default optimizer with LR 0.0035, and computations run on A6000 GPUs. We examine tail task risk minimization effectiveness on three large datasets. The class number split setup in datasets (num train/num val/num test) is TieredImageNet (351/97/160), ImagenetA [3] (128/32/40), and ImagenetSketch (640/160/200). **Result and analysis:** We'll include the **global rebuttal Figures 15-17** and the analysis below in the manuscript: >Results are reported in **Fig. 
15-17**. DR-MaPLe and DR-MaPLe+ consistently outperform baselines across both average and ${\rm CVaR}_\alpha$ indicators in $\texttt{5-way 1-shot}$ cases, demonstrating the advantage of the two-stage strategy in enhancing the robustness of few-shot learning. DR-MaPLe+ achieves better results as KDE quantiles are more accurate with large batch sizes. These results demonstrate the scalability and compatibility of our method on large models. **3. Open-source plan** Although this work focuses more on theoretical contributions, we hope our empirical investigations can provide further insight into developing large-model-augmented few-shot learners. ***We'll open-source the code for the large-model experiments in the updated manuscript to facilitate robust fast-adaptation research.*** ___ **Reference** [1] Radford, Alec, et al. "Learning transferable visual models from natural language supervision." ICML 2021. [2] Khattak, Muhammad Uzair, et al. "Maple: Multi-modal prompt learning." CVPR 2023. [3] Hendrycks, Dan, et al. "Natural adversarial examples." CVPR 2021. --- *Finally, we hope your questions are well answered, and we thank you for your suggestions for improving the quality of this manuscript.* --- --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal; review updated Comment: After reading the rebuttal and the other reviewers' comments, I add further comments. (1) Overall, this is a somewhat niche but comprehensive theoretical paper on tail risk minimization for meta-learning. (2) The extra effort made by the authors is impressive in the era of large models. These new results on tail risk minimization for MaPLe are inspiring and answer my long-standing question about the role of CLIP-like models in few-shot prediction. I am happy to see the benefits of tail risk minimization in large-model-augmented meta-learning and look forward to the open-sourced code. For the above, I thank the authors for the rebuttal and am inclined to raise my score accordingly. 
--- Reply to Comment 1.1.1: Comment: Thank you for your helpful suggestions and kindness. Your comments help improve the manuscript a lot.
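The KDE-versus-Monte-Carlo quantile estimation discussed in this thread can be illustrated with a short sketch. Everything below (Gaussian kernel, Silverman-style bandwidth rule, bisection inversion, synthetic batch) is our own assumption for illustration, not the paper's implementation.

```python
# Sketch: estimate a batch quantile two ways — the plain empirical (Monte Carlo)
# quantile, and a quantile obtained by inverting a Gaussian-KDE-smoothed CDF.
import numpy as np
from math import erf, sqrt

def kde_quantile(samples, alpha, bandwidth=None):
    """Invert a Gaussian-KDE-smoothed empirical CDF by bisection."""
    x = np.asarray(samples, dtype=float)
    h = bandwidth or 1.06 * x.std() * len(x) ** (-0.2)   # Silverman-style rule
    def cdf(t):  # smoothed CDF: average of per-sample Gaussian kernel CDFs
        return float(np.mean([0.5 * (1.0 + erf((t - xi) / (h * sqrt(2.0)))) for xi in x]))
    lo, hi = x.min() - 5.0 * h, x.max() + 5.0 * h
    for _ in range(60):                                   # bisection on the monotone CDF
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if cdf(mid) < alpha else (lo, mid)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(1)
batch = rng.normal(0.0, 1.0, size=256)       # one batch of task losses (synthetic)
q_kde = kde_quantile(batch, 0.9)             # KDE-smoothed quantile estimate
q_mc = float(np.quantile(batch, 0.9))        # plain empirical / Monte Carlo quantile
```

With small batches the two estimates can differ noticeably; the rebuttal's empirical claim is that the KDE-based estimate becomes the more accurate of the two as the batch size grows.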
Summary: This paper proposes an enhancement to the previous work termed DR-MAML by reformulating it as a Stackelberg game. Theoretical investigations regarding its solution concept, convergence rate, and generalization bound are provided. Numerical experiments demonstrate the improved robustness of the proposed method. Strengths: 1. Both theoretical guarantees and numerical validations are provided to justify the sought robustness. 2. Reformulating DR-MAML's objective as a Stackelberg game enables analyzing the problem from the viewpoint of game theory. Weaknesses: 1. The contribution of this work is incremental and exclusive to DR-MAML, which limits its scope and broader applicability. 2. The writing is hard to parse, and several notions are pretty vague. For instance, in lines 95-96, $F_{\ell}^{\alpha}$, $\Omega_{\alpha,\tau}$, and $p_{\alpha}$ are defined through illustration in a figure instead of mathematical expressions. 3. Experiments are limited and lack comparison to SOTA methods. It is recommended to compare DR-MAML+ with popular meta-learning methods (such as MetaCurvature and MetaOptNet) on open-source benchmark datasets including tieredImageNet, CUB-200-2011, and MetaDatasets. In addition, the improvements in Table 1 and Table 2 are rather marginal (no more than 0.6%). 4. There is no comparison of time and space complexity, and the scalability to large NNs like ResNet is unknown. 5. The last statement of Theorem 4.1 is imprecise. According to Eq. (22), this convergence rate solely holds true when $t$ approaches infinity. 6. The paper contains numerous typos. For instance, in line 4, the citation is not compiled. In line 95, "the resulling" should be corrected to "resulting." Additionally, in lines 96, 130, 240, 241, 244, 246, 254, 265, 283, 295, 303, 319, 328, 332, and 335-337, the cross-references are in the equation style instead of the correct format. Technical Quality: 3 Clarity: 3 Questions for Authors: See above. 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No explicit discussion on limitations are provided. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank **# Reviewer 6Ej4** for these insightful comments. The remainder mainly focuses on addressing the concerns. --- **1. Application scopes and contribution clarifications** Thanks for the comment. We apologize for any confusion about the contributions, which we summarize further: (1) Regarding the application scopes, ***this work is meta-learning method agnostic***. Apart from MAML, CNP is also used in examinations. Besides, we've taken the advice to conduct experiments on more benchmarks with large-model-augmented backbones. See **Point 3**. (2) Regarding the contribution, this work leans more on theoretical investigations of tail task risk minimization for meta-learning, which complements the empirical discoveries in Wang et al. (2023a). These include (i) the notion of solutions, (ii) understanding the Stackelberg game, (iii) generalization and asymptotic analysis in tail adaptation risk, and implementation tricks to enhance robustness. The practical enhancement is a side product of the theory. See details in **Lines 35-39**, **Lines 602-604**, and **Table 3**. **2. More descriptions of notations** Thank you for your suggestions. **We'll add more to Line 96 as follows**: >The normalized cumulative distribution $F_{\ell}^{\alpha}(l;\theta)$ is defined as: $$F\_{\ell}^{\alpha}(l;\theta)= \begin{cases} 0, & l<\text{VaR}\_{\alpha}[\ell(\mathcal{T},\theta)], \\\\ \frac{F\_{\ell}(l;\theta)-\alpha}{1-\alpha}, & l\geq\text{VaR}\_{\alpha}[\ell(\mathcal{T},\theta)]. \end{cases}$$ > $\forall\theta\in\Theta$, the meta-learning operator $\mathcal{M}\_{\theta}$ is defined as: $\mathcal{M}\_{\theta}:\tau\mapsto\ell(\mathfrak{D}\_{\tau}^{Q},\mathfrak{D}\_{\tau}^{S};\theta).$ Accordingly, the tail risk task subspace $\Omega\_{\alpha,\tau}:=\bigcup\_{\ell\geq\text{VaR}\_{\alpha}[\ell(\mathcal{T},\theta)]}\left[\mathcal{M}\_{\theta}^{-1}(\ell)\right]$, with the task distribution constrained in $\Omega\_{\alpha,\tau}$ by $p_{\alpha}(\tau;\theta)$. **3. 
Additional experiments and larger NNs** Thanks for these precious suggestions. (1) Comparison with MetaCurvature/MetaOptNet and more evaluation on larger NNs: Both [1] and our work are agnostic to meta-learning methods, with baseline selections specifically related to distributional robustness in meta-learning. Instead, we directly run experiments with large models. CLIP [2] is a more recent SOTA than MetaCurvature/MetaOptNet; hence, we employ "ViT-B/16"-based CLIP as the backbone to enable few-shot learning in the same way as MaPLe (N_CTX: 2, MAX_EPOCH: 30) [3], scaling to large NNs in evaluation. SGD is the default optimizer with LR 0.0035, and computations run on A6000 GPUs. We examine tail task risk minimization effectiveness on three large datasets. The class number split setup in datasets (num train/num val/num test) is TieredImageNet (351/97/160), ImagenetA [4] (128/32/40), and ImagenetSketch (640/160/200). We'll include the **global rebuttal Figures 15-17** and the analysis below in the manuscript: >Results are reported in **Fig. 15-17**. DR-MaPLe and DR-MaPLe+ consistently outperform baselines across both average and ${\rm CVaR}_\alpha$ indicators in $\texttt{5-way 1-shot}$ cases, demonstrating the advantage of the two-stage strategy in enhancing the robustness of few-shot learning. DR-MaPLe+ achieves better results as KDE quantiles are more accurate with large batch sizes. These results demonstrate the scalability and compatibility of our method on large models. ***We'll open-source our code in the updated paper to facilitate robust fast-adaptation research.*** (2) We've updated the related work in **Section 2**, **Lines 61-63**: >Widely known are model-agnostic meta-learning and related variants, such as MetaCurvature, which learns curvature information and transforms gradients in the inner-loop optimization. >The metrics-based methods ... For example, MetaOptNet proposes to learn embeddings under a linear classifier and achieves SOTA few-shot classification performance. 
**Reference** [1] Wang, Qi, et al. "A simple yet effective strategy to robustify the meta learning paradigm." NeurIPS 2023. [2] Radford, Alec, et al. "Learning transferable visual models from natural language supervision." ICML 2021. [3] Khattak, Muhammad Uzair, et al. "Maple: Multi-modal prompt learning." CVPR 2023. [4] Hendrycks, Dan, et al. "Natural adversarial examples." CVPR 2021. **4. More analysis and rephrasing** (1) Marginal improvement in **Table 1** and **Table 2**: Image processing necessitates a small batch size, causing smaller differences. See the analysis in **Lines 305-307**: >... we attribute this to the small batch size in training, which weakens KDE's quantile approximation advantage. (2) Time and space complexity comparison: We apologize for omitting this part. **We'll add the following to the manuscript:** >The space complexity is specific to the meta-learning method, while this work is agnostic to it. Hence, we report the computational complexity for DR-MAML+ as $\mathcal{O}\big(\mathcal{B}\big(\mathcal{B} - \alpha|\mathcal{M}| \big)\big)$ when using KDE with the Gaussian kernel, and that of DR-MAML is $\mathcal{O}\big(\mathcal{B}\big(\log(\mathcal{B}) - \alpha|\mathcal{M}|\big)\big)$. (3) Refining the Theorem 4.1 condition: You are right, and **we'll refine it as:** > Let the iteration sequence ... when $t$ approaches infinity. **5. Typos and equation-style references** We apologize for this; we'll correct the "resulling" typo to "resulting" and modify all cross-reference styles, e.g., Fig/Table/Theorem/Assumption/Example, by removing unnecessary brackets "()", such as changing "Fig. (6)" to "Fig. 6", "Theorem (4.2)" to "Theorem 4.2", and "Assumption (1)" to "Assumption 1". We hope there are no typos or cross-reference style issues this time. --- *Thanks again for carefully reading our manuscript and proposing constructive suggestions. After clarifying the contribution and including more results, we hope the evaluation of this work can be reconsidered. 
Your help is precious to us.* --- Rebuttal Comment 1.1: Comment: Thank you for providing the detailed response and additional experiments, which addressed most of my concerns. 1. Regarding the scope, could you please elaborate more on why this paper is "meta-learning method agnostic." The CVaR objective Eq. (3) of this work is specific to DR-MAML [1], and is not applicable to generic meta-learning approaches. Moreover, the theoretical investigations in Section 4 are also tailored to DR-MAML [1]. 2. Could you please also provide the practical implementation time (throughput) and space (GPU memory) comparisons? --- Rebuttal 2: Title: Thanks for the feedback Comment: We're happy to see that most of the concerns are addressed, and we express sincere gratitude for the helpful feedback and suggestions. The following answers the other questions: 1. We apologize for not clearly explaining that this work is agnostic to meta-learning methods. - Eq. (3) is not specifically for DR-MAML but is a general form of the risk function $\ell(\mathfrak{D}\_{\tau}^{Q},\mathfrak{D}\_{\tau}^{S};\theta)$ used in typical meta-learning methods. **Eq. (4)** is an example of Eq. (3) and is specific to DR-MAML. In this case, the risk function becomes $\ell(\mathfrak{D}\_{\tau}^{Q};\theta-\lambda\nabla\_{\theta}\ell(\mathfrak{D}\_{\tau}^{S};\theta))$, implying the implementation of MAML. **Algorithm 2 on Lines 661-662** shows the application of another method, CNP, where the loss function is specifically $\ell(\mathfrak{D}\_{{\tau}}^{Q};z,\theta)$, with $z=h\_{\theta\_1}(\mathfrak{D}\_{\tau}^{S})$. - The theoretical insights in Section 4 consider a general form of meta-learning with risk function $\ell(\mathfrak{D}\_{\tau}^{Q},\mathfrak{D}\_{\tau}^{S};\theta)$, not limited to MAML cases. Besides, we take MAML as a specific example for conducting theoretical analysis in Appendix Theorem C.1 on Lines 779-782. 2. The practical implementation time/space comparisons. 
We keep the setup the same as that in [1]: use the same maximum number of meta gradient updates for all baselines in training (this means that, given $\alpha=0.5$, the tail risk minimization principle requires twice as many task batches to evaluate and screen sub-batches). For practical implementation, we take vanilla MaPLe, DR-MaPLe (MC-augmented tail risk minimization), and DR-MaPLe+ (KDE-augmented tail risk minimization) in tieredImageNet few-shot classification as the example to report. The table below reports the overall training time and memory (vanilla MaPLe serves as the anchor point, and + means additional costs from the two-stage operation). | - | MaPLe | DR-MaPLe | DR-MaPLe + | |--------------------------|------|---------|----------| | Implementation time | 2.1h | +1.7h | +1.7h | | Memory | 41.57G | +36.84G | +36.84G | Despite the more complex quantile estimations, DR-MaPLe+ does not show significantly higher time and space consumption compared to DR-MaPLe. It can be seen that both DR-MaPLe and DR-MaPLe+ consume more memory, and the extra training time over MaPLe arises from the evaluation and sub-batch screening in the forward pass. These additional computation and memory costs bring a significant robustness improvement, which can be crucial in risk-sensitive fast adaptation. We'll also include this point as a potential limitation in the Appendix. [1] Wang, Qi, et al. "A simple yet effective strategy to robustify the meta learning paradigm." NeurIPS 2023. --- *Feel free to let us know if these questions have been answered well, and we’re happy to engage in further discussions. We’ll incorporate these precious suggestions into the manuscript. It would be appreciated if you could update the scores after the concerns are addressed. Your help means a lot to us.* --- Rebuttal Comment 2.1: Comment: Thank you for the elaborations and further experiments. I have no other concerns and I will update my rating. 
--- Reply to Comment 2.1.1: Title: Thanks for the update Comment: Your suggestions help improve our manuscript a lot, and we express gratitude for your support.
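The normalized tail CDF $F\_{\ell}^{\alpha}$ written out earlier in this thread can also be sketched empirically. The code below is our own illustration (uniform synthetic losses, empirical CDF, `np.quantile` as the VaR threshold), not the paper's definition verbatim.

```python
# Sketch: empirical version of the normalized tail CDF — zero below VaR_alpha,
# then the ordinary CDF rescaled to the upper (1 - alpha) tail.
import numpy as np

def normalized_tail_cdf(losses, l, alpha):
    losses = np.asarray(losses, dtype=float)
    var_alpha = np.quantile(losses, alpha)   # VaR_alpha: the tail threshold
    if l < var_alpha:
        return 0.0
    F_l = np.mean(losses <= l)               # plain empirical CDF at l
    return float((F_l - alpha) / (1.0 - alpha))

losses = np.linspace(0.0, 1.0, 101)          # synthetic loss values
alpha = 0.8
below = normalized_tail_cdf(losses, 0.5, alpha)   # below VaR_alpha
top = normalized_tail_cdf(losses, 1.0, alpha)     # at the maximum loss
```

The function is zero below the threshold and rises from 0 to 1 across the worst $(1-\alpha)$ fraction of tasks, which is exactly the renormalization the definition performs.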
Summary: The paper provides theoretical investigations for a better understanding of an existing method in the literature that focuses on minimizing expected tail risk. Equivalence of the algorithm to a Stackelberg game is shown, which allows studying its convergence rate, and asymptotic bounds on performance and generalization are provided. Finally, a practical heuristic is suggested to obtain performance improvements over the base method. Strengths: - The presentation and structure of the paper is good. - The theoretical results, while limited in mathematical novelty, seem comprehensive and are clear to follow. - Sufficient experiments and ablations are provided. Weaknesses: Clarity: - It is not clear how the expression for the expected tail risk minimization is obtained in (3); the reviewer had to refer to the corresponding reference to understand it and thus, including it would help make the paper self-contained. - Figure 1 is not clear; the font size needs to be increased. Technical Quality: 3 Clarity: 2 Questions for Authors: - What is the overhead induced by searching for the appropriate value of hyperparameter $\alpha$, compared to other baselines? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank **# Reviewer ci3Y** for these insightful comments. The remainder focuses on addressing the concerns and questions. --- **1. Additional explanations on the expected tail risk minimization** Thanks for this advice. **We'll include more explanations in Line 115** as follows: > Wang et al. [1] minimize the expected tail risk, or equivalently the $\text{CVaR}\_{\alpha}$ risk measure: $$ \min\_{\theta\in\Theta}\mathcal{E}\_{\alpha}(\theta):=\mathbb{E}\_{p_{\alpha}(\tau;\theta)} \Big[\ell(\mathfrak{D}\_{\tau}^{Q},\mathfrak{D}\_{\tau}^{S};\theta) \Big]. $$ >Since $p\_{\alpha}(\tau;\theta)$ has no closed form, [1] introduces a slack variable $\xi\in\mathbb{R}$ and reformulates the objective as follows: $$ \min_{\theta\in\Theta,\xi\in\mathbb{R}} \mathcal{E}\_{\alpha}(\theta,\xi):= \frac{1}{1-\alpha}\int\_{\alpha}^{1}v\_{\beta}d\beta = \xi+\frac{1}{1-\alpha}\mathbb{E}\_{p(\tau)} \left[\left[\ell(\mathfrak{D}\_{\tau}^{Q},\mathfrak{D}\_{\tau}^{S};\theta) -\xi\right]^{+} \right], $$ >where $v\_\beta:=F\_{\ell}^{-1}(\beta)$ denotes the quantile statistics and $\left[\ell(\mathfrak{D}\_{\tau}^{Q},\mathfrak{D}\_{\tau}^{S};\theta)-\xi\right]^{+}:=\max\\{\ell(\mathfrak{D}\_{\tau}^{Q},\mathfrak{D}\_{\tau}^{S};\theta)-\xi,0\\}$ is the hinge risk. We hope these descriptions make it easier to understand. **2. Larger font size in Figure 1** Thanks for the valuable feedback. We've increased the font size in Figure 1 and will update it in the final manuscript. **3. Questions about the hyperparameter $\alpha$** Thanks for your insightful question. (1) In our experiments, we did not perform extensive searches over the hyperparameter $\alpha$. Instead, we adopted hyperparameter settings consistent with the existing literature [1]. This ensures that our results are fair in comparison to previous studies without introducing additional overhead from hyperparameter tuning. 
(2) To further reveal the impact of the confidence level $\alpha$ on model performance, we conducted ablation experiments, as shown in **Fig. 12/13**. We can observe that in both sinusoid 5-shot and 10-shot tasks, as the confidence level varies, DR-MAML+ exhibits more stable performance than DR-MAML, indicating that DR-MAML+ has lower sensitivity to confidence levels. --- **Reference** [1] Wang, Qi, et al. "A simple yet effective strategy to robustify the meta learning paradigm." NeurIPS 2023. --- *Finally, we hope these questions and concerns are well answered and addressed. Thanks again for your efforts. We are happy to discuss any other questions and provide further clarifications.* --- Rebuttal 2: Title: Acknowledgement of Rebuttal Comment: I thank the authors for responding to the reviewers' comments and feedback, and I appreciate the additional evaluation on larger models. However, I am inclined to agree with the other reviewers regarding the limited scope of the work. Additionally, the practical benefits of the proposed method are largely tangential to the main theoretical contribution about the Stackelberg game equivalence, and arise from improving the risk estimates using more sophisticated density estimation, something that is already noted in the original work it builds from [1]. For these reasons, I keep my score as is. [1] Wang, Qi, et al. "A simple yet effective strategy to robustify the meta learning paradigm." NeurIPS 2023. --- Rebuttal 3: Title: Thanks for precious feedback Comment: Thanks for recognizing our primary theoretical contributions. --- You're right; the scope of this work is restricted to a theoretical understanding of tail risk minimization in meta-learning. 
The practical benefits of the quantile estimates are indeed inspired by the hypothesis in [1], and we (i) *establish rigorous theoretical analysis in* **Theorem 4.4** and **Remark 1**; (ii) *empirically conduct large-model few-shot learning experiments and verify the advantage of KDE over MC with larger batch sizes, complementing the theory*. Ref: [1] Wang, Qi, et al. "A simple yet effective strategy to robustify the meta learning paradigm." NeurIPS 2023. --- *Finally, we're encouraged by your positive comments, and your valuable suggestions greatly help improve the manuscript. Many thanks.*
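The hinge-risk ($\text{CVaR}_\alpha$) reformulation with the slack variable $\xi$ quoted earlier in this thread can be checked numerically. The sketch below is our own illustration with synthetic losses; a grid search over $\xi$ stands in for the joint optimization.

```python
# Sketch: the Rockafellar-Uryasev objective  xi + E[(loss - xi)^+] / (1 - alpha)
# is minimized over xi at the alpha-quantile (VaR), and its minimum equals CVaR.
import numpy as np

def cvar_hinge(losses, xi, alpha):
    return xi + np.maximum(losses - xi, 0.0).mean() / (1.0 - alpha)

rng = np.random.default_rng(0)
losses = rng.normal(1.0, 0.5, size=50_000)   # synthetic per-task adaptation losses
alpha = 0.9

xis = np.linspace(losses.min(), losses.max(), 1001)   # grid search over xi
vals = [cvar_hinge(losses, x, alpha) for x in xis]
xi_star = float(xis[int(np.argmin(vals))])

var_alpha = float(np.quantile(losses, alpha))            # empirical VaR
cvar_direct = float(losses[losses >= var_alpha].mean())  # mean of the tail
```

`xi_star` lands at (approximately) the empirical VaR, and the minimized objective matches the tail mean, which is what licenses the slack-variable reformulation used in the rebuttal.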
null
null
Rebuttal 1: Rebuttal: *We sincerely thank all reviewers and area chairs for their work. This global response summarizes the reviews, addresses concerns, answers questions, and reports changes in the manuscript.* --- ### **I. Review Summary** We thank all reviewers for their comments: 1. a *well-constructed paper that is easy to follow* **[# Reviewer ci3Y and # Reviewer 8vBx]**; 2. *comprehensive theoretical analysis aligned with numerical evaluations* **[# Reviewer ci3Y, # Reviewer 6Ej4, and # Reviewer 8vBx]**; 3. *addresses the problem from the Stackelberg game/max-min optimization viewpoint* **[# Reviewer 6Ej4 and # Reviewer 8vBx]**. --- ### **II. Primary Concerns and Questions** **1. Contributions and application scope clarifications [# Reviewer 6Ej4]** (1) Regarding the contribution, this work leans more on theoretical points **[# Reviewer ci3Y and # Reviewer 8vBx]** throughout tail risk minimization for meta-learning, which complements the empirical discoveries in Wang et al. (2023a) [1]. These include (i) the notion of solutions, (ii) algorithmic understanding from the Stackelberg game, (iii) generalization, asymptotic analysis in tail adaptation risk, quantile estimates' influence, and implementation tricks to enhance robustness, as mentioned in **Lines 35-39** and **Table 3**. The practical enhancement is a side product of the theory. (2) Regarding the application scopes, *this work is meta-learning method agnostic*. Apart from MAML, CNP is also used in examinations. Importantly, we've taken the advice to ***conduct experiments on more benchmarks with large-model-augmented backbones***. See **Point 2** and the detailed **Responses to # Reviewers 6Ej4/8vBx**. **2. Comparison with MetaCurvature/MetaOptNet and more evaluation [# Reviewer 6Ej4], Scalability in large models [# Reviewer 6Ej4 and # Reviewer 8vBx]** Both [1] and our work are agnostic to meta-learning methods, with baseline selections specifically related to distributional robustness in meta-learning. 
Instead, we directly run experiments with large models. CLIP [2] is a more recent SOTA than MetaCurvature/MetaOptNet; hence, we employ "ViT-B/16"-based CLIP as the backbone to enable few-shot learning in the same way as MaPLe (N_CTX: 2, MAX_EPOCH: 30) [3], scaling to large NNs in evaluation. SGD is the default optimizer with LR 0.0035, and computations run on A6000 GPUs. We examine tail task risk minimization effectiveness on three large datasets. The class number split setup in datasets (num train/num val/num test) is TieredImageNet (351/97/160), ImagenetA [4] (128/32/40), and ImagenetSketch (640/160/200). We've attached the results in **Figures 15-17** and the analysis below: >Results are reported in **Fig. 15-17**. DR-MaPLe and DR-MaPLe+ consistently outperform baselines across both average and ${\rm CVaR}_\alpha$ indicators in $\texttt{5-way 1-shot}$ cases, demonstrating the advantage of the two-stage strategy in enhancing the robustness of few-shot learning. DR-MaPLe+ achieves better results as KDE quantiles are more accurate with large batch sizes. These results demonstrate the scalability and compatibility of our method on large models. **3. Eq. (3) descriptions, larger font size in Fig. 1, more analysis, one typo, and cross-reference style [# Reviewer ci3Y and # Reviewer 6Ej4]** (1) We've added Eq. (3) and other notation descriptions. See the individual responses. (2) We've enlarged Fig. 1's font size in the updated version. (3) We've refined Theorem 4.1 to make our statement more precise; for the time/space complexity analysis, see the **Response to # Reviewer 6Ej4**. (4) Typos and equation-style references: We've corrected the “resulling” typo to “resulting” and modified all cross-reference styles, e.g., Fig/Table/Theorem/Assumption/Example, by removing unnecessary brackets “()”, such as changing “Fig. (6)” to “Fig. 6”, “Theorem (4.2)” to “Theorem 4.2”, and “Assumption (1)” to “Assumption 1”. 
We've proofread the manuscript many times to ensure no typos or informal reference styles remain. **4. Other questions and open-source plan** We address the remaining questions in the individual responses. *We'll open-source the code for the large-model experiments in the updated manuscript to facilitate robust fast-adaptation research.* **Reference** [1] Wang, Qi, et al. "A simple yet effective strategy to robustify the meta learning paradigm." NeurIPS 2023. [2] Radford, Alec, et al. "Learning transferable visual models from natural language supervision." ICML 2021. [3] Khattak, Muhammad Uzair, et al. "Maple: Multi-modal prompt learning." CVPR 2023. [4] Hendrycks, Dan, et al. "Natural adversarial examples." CVPR 2021. --- ### **III. Manuscript Future Changes** **1. Include new experimental results with large-model backbones.** **2. Add descriptions and discussions, and revise cross-reference styles as previously mentioned.** --- *Once again, thank you to all the reviewers and area chairs. Your effort means a lot in improving our manuscript.* Pdf: /pdf/913697ada4682a63384afb75a2558eef6aeac085.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Quantile Activation: Departing from single point estimation for better generalization across distortions
Reject
Summary: The paper introduces a novel activation function called Quantile Activation (QACT), which aims to improve the robustness of neural networks against various data distortions. The authors propose an end-to-end framework that combines QACT with modified loss functions and quantile classifiers, evaluating their approach on several benchmark datasets. Strengths: 1. The paper is well organized and clearly written. 1. The paper delivers useful empirical and theoretical insights. 1. The experimental results showcase the superiority of the proposed method. Weaknesses: 1. The proposed method seems a bit complex, which may lead to over-fitting in scenarios with limited training data. 1. Although the experiments are promising, it remains unclear how well the proposed method would scale to larger datasets or more complex tasks beyond those tested in the paper. 1. I feel that the evaluations are somewhat limited as only a few methods are compared against, lacking the most recent SOTAs. This limits the understanding of the real technical contribution of the proposed method. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see above. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The limitations should be discussed in the main paper, yet they are provided in the checklist. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Overfitting:** Interestingly, while we do add a lot of operations, we do not actually increase the number of trainable parameters. Algorithm 1 in the article is a fixed function which depends on the entire batch of inputs and has no parameters. Hence, we do not expect overfitting any more than in the underlying network itself. Further, one can deal with any overfitting by any of the conventional approaches: (a) increasing the dataset size by adding more augmentations, (b) increasing regularization coefficients, (c) adding additional constraints to the loss function, etc. **Scaling of Quantile Activation:** As discussed in the article, the quantile activation scales as O(n) vs. O(1) for the ReLU activation. However, observe that the operations within quantile activation are massively parallelizable. In fact, we only see a 2x slowdown with quantile activation, even though the scaling is of the order O(n). This can be optimized further by using better primitives. Hence, we do not think scaling these ideas to very large networks is problematic. Also, we do consider larger datasets such as miniImagenet, and show that our observations indeed hold at scale as well. We shall include a brief discussion in the final version of the article about scaling and the fact that quantile activation uses operations which can be massively parallelized. **Comparison with SOTA:** According to https://paperswithcode.com/sota/domain-generalization-on-imagenet-c, DinoV2 is the current state-of-the-art in domain generalization on ImageNet-C. We show that a Resnet18 with 11M parameters trained solely on CIFAR-10 can have better performance than DinoV2-small, which is a distilled version of DinoV2-large with 1 billion parameters trained on extensive datasets, including ImageNet-21K. This comparison provides compelling evidence that quantile activation significantly enhances robustness to distortions.
Note that the main contribution (and scope) of this article is to propose a novel activation function which improves robustness. We compare our approach with the current best-known activations such as ReLU, pReLU and SeLU. Also note that, while there exist a few other lines of work improving robustness (lines 80-90), these ideas are tangential and complementary to the one proposed in this article. Our aim is to design architectural changes such that the network considers the "context distribution", which results in better robustness to corruptions. Since QACT is a drop-in replacement for the widely used ReLU, it is compatible with all other approaches as well. This is left as future research. We already include a discussion on the contributions of this article. We shall improve this in the final version by including the scope and other relevant research as well.
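The batch-dependent, parameter-free behavior described in the rebuttal can be sketched with a minimal NumPy function: each neuron's output is its empirical quantile within the current batch. The function name and the mid-rank normalization here are our own illustrative choices, and the paper's Algorithm 1 (with its KDE-based backward pass) may differ in detail.

```python
import numpy as np

def quantile_activation(z):
    """Map each pre-activation to its empirical quantile within the batch.

    z: array of shape (batch, features). For every neuron (column), a
    sample's value is replaced by its rank fraction among the batch,
    so the output depends on the batch ("context") distribution, not on
    the raw value alone. The function has no trainable parameters.
    """
    batch = z.shape[0]
    # rank of each entry within its column, 0 .. batch-1
    ranks = np.argsort(np.argsort(z, axis=0), axis=0)
    # mid-rank normalization keeps outputs strictly inside (0, 1)
    return (ranks + 0.5) / batch
```

One consequence visible in this sketch is invariance to any monotone rescaling of a neuron's pre-activations, which is one intuition for robustness to distribution shift.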
Summary: The paper introduces Quantile Activation (QACT) to enhance classification model robustness against distributional shifts. Unlike traditional classifiers, QACT outputs the relative quantile of a sample in its context distribution, allowing for context-dependent classification. Validated on datasets like CIFAR10C and MNISTC, QACT improves generalization and robustness, outperforming state-of-the-art models like DINOv2 under large distortions. The paper details QACT's implementation and suggests future research directions, including scaling and exploring theoretical links to biological neurons. Strengths: The paper is notable for its originality in proposing a context-aware activation function, demonstrates high quality through extensive validation, and has significant potential for enhancing generalization in classification models. The innovative use of quantile-based activations opens new opportunities for research and applications in machine learning. Weaknesses: - Lack of Clarity on Context Dependency The concept of context dependency being batch-dependent is not clearly explained until the conclusion of the paper. This crucial detail should be introduced and elaborated on earlier to provide a better understanding of the method. - Unclear Motivation in Introduction The motivation and fundamental problem discussed in the introduction are not clearly articulated. The authors mention that, unlike NLP where context is considered, general classification systems do not incorporate context. However, in Vision Transformers (ViTs), image patches are treated similarly to words in NLP "[...The meaning of a word is dependent on the context of the word. However, to our knowledge, this has not been considered for general classification systems.]". In ViTs, an image patch is considered like a word, and the context comes from the other image patches. 
- Insufficient Related Work on Robustness The paper lacks a comprehensive review of related work concerning robustness to input distortions. Including a discussion of existing methods and how QACT compares or improves upon them would strengthen the paper. - Limited Comparative Analysis The comparison with other methods addressing robustness to input distortions is insufficient. The authors primarily compare QACT with DINOv2-Small, which is not a standard model for robustness. Including comparisons with other state-of-the-art methods specifically designed for robustness would provide a more complete evaluation of QACT's performance. Technical Quality: 3 Clarity: 2 Questions for Authors: N/A Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Discussion on Context Distribution and Batch Dependency:** Note that we obtain the context distribution from the other samples in the batch. This is discussed in lines 166-175 as well. We reiterate this in the conclusion. **Do transformers also consider context?** Note that a transformer processes one image at a time, by splitting it into 16x16 patches, and thus has no information about the distribution of images itself. Specifically, a transformer cannot distinguish between an image from a distorted distribution and an image from a different distribution -- this can only be judged if there are several images with such distortions and the network processes all of them simultaneously. However, as discussed, the naive approach increases the computational complexity. We propose an efficient alternative in this article. **Related works** We discuss the current approaches we are aware of that tackle robustness in lines 80-90. Please note that, to our knowledge, there is *no* work which tackles the question of context-dependent outputs for vision. If the reviewer is aware of relevant work that we missed, we would appreciate it if such references are provided in the comments. We shall include these in the final references. **Comparison with DinoV2** According to https://paperswithcode.com/sota/domain-generalization-on-imagenet-c, DinoV2 is the current state-of-the-art in domain generalization on ImageNet-C. We show that a Resnet18 with 11M parameters trained solely on CIFAR-10 can have better performance than DinoV2-small, which is a distilled version of DinoV2-large with 1 billion parameters trained on extensive datasets, including ImageNet-21K. This comparison provides compelling evidence that quantile activation significantly enhances robustness to distortions. Note that the main contribution (and scope) of this article is to propose a novel activation function which improves robustness.
We compare our approach with the current best-known activations such as ReLU, pReLU and SeLU. Also note that, while there exist a few other lines of work improving robustness (lines 80-90), these ideas are tangential and complementary to the one proposed in this article. Our aim is to design architectural changes such that the network considers the "context distribution", which results in better robustness to corruptions. Since QACT is a drop-in replacement for the widely used ReLU, it is compatible with all other approaches as well. This is left as future research. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. I have read the opinions of other reviewers, and I will maintain my score.
Summary: The authors propose a new activation function called quantile activation (QACT) which outputs the relative quantile of the sample in the context distribution. Furthermore, the paper validates the proposed activation across several experimental settings, and compare it with conventional techniques. They test robustness against distortions, and find that the proposed activation can achieve a significantly higher generalization across distortions than the conventional classifiers, across different architectures. Strengths: First, the authors develop existing approach in calibrating a pre-trained classifier to the level of a neuron. Thus, suitable forward and backward propagation equations required for learning are derived. Second, the authors also show that the extension can produce context dependent outputs at the level of each neuron of the neural network. Weaknesses: The writing of the paper meets the standard, but the notations are confusing. Nevertheless, it would be much better if the authors can polish and clarify them. For instance: 1. in line 107, the authors claim that ‘Assign $\mathbf y=1$ whenever $\mathbf y > (1-\tau)^{th}$ quantile of $\mathbf z$’. It seems that $\mathbf z$ is a vector and is impossible to have a vector be larger than a scalar. 2. The authors write $z_i$ and $\mathbf z_i$ alternatively to mean the same quantity. Similar situations occur when the authors write $z$ and $\mathbf z$ (see line 119, Eqn. (4)), or QACT$(\textbf z)$ and QACT$(\mathbf z)$ (see lines 119 and 124). 3. The authors use bold lowercase letters to represent vectors (e.g., Eq. (1)) and variable distributions (e.g., lines 105, 106). Also, what is the difference between bold lowercase letters and normal lowercase letters? Further clarifications can increase the readability of the paper. Technical Quality: 2 Clarity: 3 Questions for Authors: Below are some questions that want to ask. Please clarify: 1. 
In the backward propagation, it is essential to estimate the density. Nevertheless, there are many ways of estimating the density, and different ways have different computational complexity and accuracy. Please justify your choice: e.g., recently, generative models have become a prominent way to estimate densities. If such methods can produce better estimates without much computation time, it would be worth switching from kernel estimation to a generative approach. 2. When the sample size is large enough, it may be necessary to train in batches. It is unclear how to combine the estimates from various batches. Could you describe how to integrate the estimations obtained from various batches? 3. The worst scenario is that the estimations from various batches vary dramatically depending on the batches chosen. If, unfortunately, two batches are drawn such that they contain extreme values (e.g., batch 1 contains extremely small values while batch 2 contains extremely large values), how would you solve the problem? 4. According to the forward and backward algorithms, the computation requires integration and kernel density estimation. Although the authors have analysed the computational complexity, they should also directly demonstrate in experiments the computational speed and memory of the proposed activation function compared to a classical activation function. Could they provide the relevant studies? The current score will be adjusted according to the authors' responses. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Please see the questions and weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and interesting questions. 1. **Generative Models vs. Quantile Activation:** We do agree with the reviewer that generative models are indeed one approach to estimating the density. As a simple case, one can estimate the mean/stdev for each neuron and use the re-parametrization trick. However, *within the context of this article*, there are a few advantages of quantile activation over generative models: - (a) The normal distribution assumption is highly prohibitive in capturing long tails, which is something we expect to see in the presence of distribution shifts. - (b) Moreover, since we are working in 1 dimension (considering each neuron), the computation of quantiles and the quantile activation is very fast, and using generative models might not improve this further. - (c) The existing literature actively avoids using outputs from a generative model which depend on the other activations in the batch. 2. **Effect of Batches while training:** In this article, we assumed that each batch is representative of the underlying distribution. As the reviewer points out, if the models are large, then batch sizes could be small. There are many computational tricks to mitigate this: (a) one can use a checkpointing-like strategy in PyTorch to increase the batch size, (b) one can add the assumption of a prior and update the "context distribution" of each neuron using a simple sub-sampling method, etc. Since the scope of this article is to verify the basic premise -- that using the context distribution improves robustness -- we consider these for future work. 3. **Computational Complexity of Quantile Activation:** As discussed in the article (lines 143-150), the quantile activation scales as O(n) vs. O(1) for the ReLU activation.
On our system - 10 CPUs, 256 GB RAM and a single 16 GB Nvidia RTXA6000 GPU - with a batch size of 1024 (maximum), we see that ReLU-based training is only 2x faster than quantile-activation-based training. The key reason is that most of the quantile activation operations are massively parallel and can exploit the GPU for significant speedup. We shall improve the writing in the final version of the article. 4. **Writing:** We have used element-wise vector operations in the article, which may have caused some confusion. We will correct any discrepancies and make the notation conventions clearer in the final version.
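The kernel density estimation mentioned for the backward pass can be sketched as follows: since the quantile output is the empirical CDF value, its derivative with respect to the input is the density, which a smooth KDE can estimate. The Gaussian kernel and the fixed bandwidth here are illustrative assumptions, not necessarily the paper's exact choices.

```python
import numpy as np

def kde_density(z_batch, bandwidth=0.5):
    """Gaussian KDE estimate of a neuron's density f(z) at each batch value.

    The quantile activation outputs the empirical CDF F(z); since
    dF/dz = f(z), a smooth density estimate yields a usable gradient for
    backpropagation. z_batch: shape (batch,), one neuron across the batch.
    """
    diffs = (z_batch[:, None] - z_batch[None, :]) / bandwidth
    kernel = np.exp(-0.5 * diffs**2) / np.sqrt(2 * np.pi)
    # average kernel mass contributed by every batch sample
    return kernel.mean(axis=1) / bandwidth
```

The pairwise-difference matrix makes the O(n) per-sample cost explicit, while also showing why the computation parallelizes well on a GPU.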
null
null
Rebuttal 1: Rebuttal: We would like to take this opportunity to present a broad outline of our article and its contributions. **Broad Claim:** The central hypothesis of this article is that incorporating input samples from the context distribution when obtaining representations enhances robustness to distortions. However, a major challenge lies in the computational expense of naive methods. **Contribution:** To address this, we propose quantile activation, which takes the context distribution into account at the level of each neuron, thereby achieving context-sensitive representations at a fraction of the cost of naive methods. Algorithms 1 and 2 detail the forward and backward computation of our proposed method. **Empirical Validation** According to https://paperswithcode.com/sota/domain-generalization-on-imagenet-c, DinoV2 is the current state-of-the-art in domain generalization on ImageNet-C. We show that a Resnet18 with 11M parameters trained solely on CIFAR-10 can have better performance than DinoV2-small, which is a distilled version of DinoV2-large with 1 billion parameters trained on extensive datasets, including ImageNet-21K. This comparison provides compelling evidence that quantile activation significantly enhances robustness to distortions. Note that the main contribution (and scope) of this article is to propose a novel activation function which improves robustness. We compare our approach with the current best-known activations such as ReLU, pReLU and SeLU. Also note that, while there exist a few other lines of work improving robustness (lines 80-90), these ideas are tangential and complementary to the one proposed in this article. Our aim is to design architectural changes such that the network considers the "context distribution", which results in better robustness to corruptions. Since QACT is a drop-in replacement for the widely used ReLU, it is compatible with all other approaches as well. This is left as future research.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
DiffPhyCon: A Generative Approach to Control Complex Physical Systems
Accept (poster)
Summary: This paper proposes an algorithm for controlling complex physical systems, particularly in long-term settings. The method is based on diffusion models and energy methods, utilizing data generated by traditional finite difference methods. Strengths: * The problem studied in this paper is important and well-motivated. * The presentation is clear and easy to follow. * Detailed experimental results of the proposed algorithm are presented in both the main text and appendices. Weaknesses: * The method proposed in the paper does not demonstrate its strength in challenging tasks. I agree with the claim that the fundamental challenge lies in simulating complex physical systems, which are high-dimensional and highly nonlinear. However, the current paper does not address these problems convincingly for the following two reasons: * First, both numerical examples are low-dimensional, so the curse of dimensionality does not have much effect. * Second, their dynamics are not very complex. At first glance, the 2D Jellyfish task appears complex, as it involves solving the 2D Navier-Stokes equation, which might yield turbulent solutions. However, I realized that the optimization focuses on the jellyfish itself. I was wondering if there is a chance to obtain a good solution even if the simulation of fluid dynamics is not accurate. If this is the case, the results of this example may not provide strong evidence that the algorithm can handle complex dynamics. Additionally, this paper does not present an error analysis for the fluid dynamics simulation; only energy-comparison statistics are provided. * This algorithm depends on the data generated by finite difference methods, hence suffering from the curse of dimensionality. Technical Quality: 2 Clarity: 3 Questions for Authors: * What is the energy term $E_\theta(u, w, c)$? * In the 2D Jellyfish task, what is the value of the kinematic viscosity $\nu$?
* Is it possible to make the proposed algorithm independent of classical methods (e.g., the finite difference method used in this paper) for data generation? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: This paper has addressed its limitations in the main text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive and detailed comments. We are glad that the reviewer finds our work clear and well-motivated with detailed results. Below, we address the reviewer’s questions one by one. > **Comment 1**: ...The current paper does not address these problems convincingly for the following two reasons: > >-- First, both numerical examples are low-dimensional so that the curse of dimensionality does not affect much. > >-- Second, their dynamics are not very complex. ...I was wondering if there is a chance to obtain a good solution even if the simulation of fluid dynamics is not accurate. If this is the case, the results of this example may not provide strong evidence that the algorithm can handle complex dynamics. Additionally, this paper does not present the error analysis for the fluid dynamics simulation, only stats of energy comparison are provided. **Answer**: Our tasks align with existing data-driven physical control papers [1-3]. Controlling Burgers' Equation is challenging due to shock wave tendencies, and our setup is more difficult with partial observation/control combinations. The 2D jellyfish control task includes challenges such as (1) asymmetric wake vortex formation from symmetric structures and motion modes [4]; (2) the nonlinear, complex interaction of vortices with the shell [5]. Still, to make our experiments more convincing, we **conducted new 2D incompressible fluid control experiments**, following a similar (but more challenging) setup of [3]. Control forces were applied within a 64x64 grid flow field, excluding a semi-enclosed region, to minimize the smoke failing to exit through the top middle exit (seven exits in total). For illustration of settings, please refer to subfigures (a) and (b) of Figure 1 in the PDF file at the end of **General Response**. 
This **high-dimensional indirect control problem** involves managing 2-dim forces at approximately 1,700 grid points per time step, resulting in about **100,000 control variables in total** across 32 time steps, making it highly challenging. We generated 20,000 training trajectories (including features of the smoke density field, velocity fields, control force fields and smoke proportion field) after filtering low-quality trajectories. The average control objective in the training set is 49.9%. We evaluated 50 test samples, and the results are shown in Table 1 below. It shows that **our method still has significant advantages**, especially in validating the role of prior reweighting. **The average relative L2 error of the fluid dynamics simulation of our method is 19.2%**. In particular, **the curse of dimensionality is effectively addressed by our method**. For **visualization** of the generated control and density map of smoke, please refer to subfigure (c) of Figure 1 in the PDF file at the end of the **General Response**. These updates will be added to the final version of our manuscript.

Table 1: Performance comparison on the new 2D indirect fluid control task.

| Method | Control Objective $\downarrow$ |
| - | - |
| BC | 0.3085 |
| BPPO | 0.3066 |
| SAC (pseudo-online) | 0.3212 |
| SAC (offline) | 0.6503 |
| DiffPhyCon-lite | 0.2324 |
| DiffPhyCon ($\gamma$=0.96) | **0.2254** |

>**Comment 2**: This algorithm depends on the data generated by finite difference methods, hence suffering from the curse of dimensionality; > >**Comment 3**: Is it possible to make the proposed algorithm independent of classical methods (e.g., the finite difference method used in this paper) for data generation? **Answer**: Our method does not depend on data generated by finite difference methods. Our method belongs to the domain of deep learning-based surrogate modeling and, like any other method in this domain, is **decoupled from the data-generating process**.
The data can come from classical solvers (not limited to finite difference methods) or actual physical observations, e.g., 1D data from finite difference methods and 2D data from the finite volume method [6] in our paper. Moreover, like any other deep learning-based surrogate modeling method, the goal is exactly to deal with complex, high-dimensional processes. It is exactly our generative control method that can better learn the high-dimensional dependencies from the given data. >**Comment 4**: What is the energy term $E_\theta (u,w,c)$? **Answer**: It is the parameterized energy-based model of the joint distribution $p(u, w, c)$, where $w$ is the control sequence, $u$ is the state trajectory, and $c$ is the set of conditions, like initial conditions and boundary conditions. $E_\theta(u,w,c)=-\log(p(u,w,c))+\text{const.}$ characterizes $p(u,w,c)$. Given $c$, we aim to generate $(u,w)$ that lies on the data manifold with high probability, with the optimal objective $J$. To achieve this goal, we train a diffusion model $\epsilon_\theta$ to approximate $\nabla E_\theta(u,w,c)$. During inference, the sampling starts from Gaussian noise $(u_K, w_K)$ and travels along $\nabla E_\theta(u,w,c)$ by applying $\epsilon_\theta$ iteratively to reach a final sample $(u_0,w_0)$ with near-minimal energy $E_\theta(u_0, w_0, c)$, under the guidance of $J$. >**Comment 5**: In the 2D Jellyfish task, what is the value of the kinematic viscosity $\nu$? **Answer**: In our paper, $\nu$ is set to 0, considering an inviscid fluid, similar to other flow field control papers [3]. Our method can also be applied to viscous fluids and does not restrict the magnitude of $\nu$. [1] Solving pde-constrained control problems using operator learning, 2022. [2] Optimal control of PDEs using physics-informed neural networks (PINNs), 2021. [3] Learning to Control PDEs with Differentiable Physics, 2020. [4] Stable hovering of a jellyfish-like flying machine, 2014.
[5]Propulsive performance and vortex dynamics of jellyfish-like propulsion with burst-and-coast strategy, 2023. [6]Conservative Volume-of-Fluid method for free-surface simulations on Cartesian-grids, 2010. --- Rebuttal Comment 1.1: Title: Response Comment: I would like to thank the authors for their rebuttal. When saying the curse of dimensionality, I was referring to the dimension of the state space, which is 2-dimensional, instead of the number of nodes. It would be more convincing to see your method performs well in high-dimensional tasks. I will maintain my score. --- Reply to Comment 1.1.1: Comment: Thanks for your feedback. We appreciate your clarification regarding the curse of dimensionality and acknowledge the importance of higher-dimensional tasks. However, our method is a generic control method for physical system control. It is **agnostic** to specific tasks, model architectures, and data generation methods. Based on our results on the 1D task, the 2D jellyfish control task, and the new 2D smoke control task during rebuttal, the effectiveness of our method is convincingly demonstrated. For 3D tasks, we believe our method is also applicable, provided that efficient solvers, effective model architectures, and sufficient offline data are accessible. Exploration on 3D tasks is left as future work. Thank you again for your valuable comments. We will continue to refine our work based on your suggestions.
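The energy-guided sampling loop described in the answer to Comment 4 can be sketched schematically: at each denoising step, the learned noise prediction (standing in for $\nabla E_\theta$) is combined with gradient guidance from the control objective $J$. All names, the `guide_scale` parameter, and the simplified update rule below are our own assumptions for illustration, not DiffPhyCon's exact sampler.

```python
import numpy as np

def guided_denoise_step(x, eps_model, grad_J, t, alpha, sigma, guide_scale, rng=None):
    """One simplified guided denoising step.

    eps_model(x, t) plays the role of the learned score direction
    (approximating grad E_theta); grad_J(x) adds guidance from the control
    objective J so that low-energy AND low-objective samples are favored.
    """
    eps = eps_model(x, t) + guide_scale * grad_J(x)
    mean = (x - sigma * eps) / alpha
    if t == 0:  # final step: return the denoised mean, no fresh noise
        return mean
    noise = (rng or np.random.default_rng()).standard_normal(x.shape)
    return mean + sigma * noise
```

Iterating this step from pure Gaussian noise down to `t = 0` mirrors the description of traveling along the energy gradient toward a near-minimal-energy sample $(u_0, w_0)$.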
Summary: This paper introduces Diffusion Physical Systems Control (DiffPhyCon), where diffusion models are used to generate a near-optimal controller for a system described by a partial differential equation (PDE). In this generative approach, a learned generative energy function and control objective is minimized. Additionally, a prior reweighting technique is developed to mitigate the effect of prior distribution of control sequences. Strengths: The paper is well-written and the method is well-motivated. The performance of the developed generative approach is demonstrated using extensive simulation studies on 1D Burgers equation and 2D Jellyfish movement control. The efficiency of this method is compared with a decent number of baselines. Additional experiments are provided to study the effects of hyperparameters. Weaknesses: The contribution of the paper is not clear compared to the following recently published result in ICLR: Wei, L., Hu, P., Feng, R., Du, Y., Zhang, T., Wang, R., Wang, Y., Ma, Z.M. and Wu, T., Generative PDE Control. In ICLR 2024 Workshop on AI4DifferentialEquations In Science. The method looks identical to the generative PDE control result in the above reference which is published and peer-reviewed. The additional contribution, if any, is too minor for NeurIPS. For this reason, I cannot recommend the paper to be accepted for NeurIPS. However, I do think this paper would have been a contribution to NeurIPS otherwise. ----- UPDATE: It appears the ICLR workshop is not considered an archival publication. With this consideration, I update my score to 7. Technical Quality: 4 Clarity: 4 Questions for Authors: Besides the big question of novelty in comparison to the "Generative PDE Control" paper from ICLR, I have the following comments for the authors: 1. The authors acknowledge the limitation that DiffPhyCon presently operates in an open-loop manner. 
This limitation by itself is not problematic, but the claims made in the introduction need revision in that case. Due to robustness concerns, a controller is seldom implemented open-loop even for simple systems, let alone complex physical systems. However, open-loop optimal control design can be applied to motion planning or trajectory generation objectives. In the context of this paper, the generative approach could be used to construct a nominal pre-planned open-loop control sequence term which can then be added to a feedback term (e.g. PID). Thus, I suggest revising the introduction to claim planning as the objective. For more information about motion planning, the authors are referred to Chapter 12 (Motion Planning for PDEs) from the reference: Krstic, M. and Smyshlyaev, A., 2008. Boundary control of PDEs: A course on backstepping designs. Society for Industrial and Applied Mathematics. 2. In line 72 of page 2, the authors claim "...DiffPhyCon facilitates global optimization of long-term dynamics...". It is not clear how the globality (or even non-triviality) of the near-optimal control sequence is guaranteed to be achieved. I am not sure why Algorithm 1 would not yield local optima. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes, the authors have stated the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s valuable feedback and helpful suggestions. We are pleased to hear that the reviewer finds our paper well-written, well-motivated, and extensive in its results. Below, we address the reviewer’s questions one by one. >**Comment 1**: ... Thus, I suggest to revise the introduction to claim planning as the objective. For more information about motion planning, the authors are referred to Chapter 12 (Motion Planning for PDEs) from the reference: Krstic, M. and Smyshlyaev, A., 2008. Boundary control of PDEs: A course on backstepping designs. Society for Industrial and Applied Mathematics. **Answer**: Thanks for this constructive suggestion. Based on your advice and the discussions in the referenced literature, our problem setting indeed aligns more closely with a planning task, where future multi-step actions are generated at once. We will revise the introduction and other relevant sections of the paper to **change the keyword "control" to "planning"**. >**Comment 2**: In line 72 of page 2, the authors claim "...DiffPhyCon facilitates global optimization of long-term dynamics...". It is not clear how the globality (or even non-triviality) of near-optimal control sequence is guaranteed to be achieved. **Answer**: We apologize for not clearly explaining global optimization. By this, we mean treating the state trajectory and control sequences **over all physical time steps** as a single variable during diffusion/sampling. This approach is chosen because control objectives are typically defined over the entire time span. In particular, when the objective function includes several conflicting terms, our method can **relieve the myopic failure modes** [1] that may exist in reinforcement learning's iterative decision-making. For instance, in our 2D control task, maximizing the jellyfish's average speed requires sharp angle changes in the early stage, conflicting with minimizing the total energy cost $R(w)$. 
More details are in **Lines 302-307 and Appendix H.7** on myopic failure modes of SAC **in our original submission**. >**Comment 3**: I am not sure why Algorithm 1 would not yield local optima **Answer**: We do not guarantee that Algorithm 1 yields local optima. However, our **further theoretical study during rebuttal** shows that **prior reweighting** (Line 6 of Algorithm 1) **enhances the probability of obtaining global near-optimal solutions**. Informally speaking, for tasks with suboptimal training control sequences, there exists a hyperparameter $\gamma < 1$ such that using prior reweighting with this $\gamma$ increases the likelihood of obtaining near-optimal control sequences compared to not using this technique ($\gamma = 1$). Due to space limitations, for details of the formal statement of the theorem and proof, please refer to **Official Comment** on top of this webpage. [1] Janner, et al. "Planning with diffusion for flexible behavior synthesis.", ICML 2022. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. The response was convincing. I am conditionally increasing my score to 8, on the condition the authors actually revise the final version with the keyword planning instead of control. --- Reply to Comment 1.1.1: Comment: Thanks for your feedback and support. We appreciate your willingness to increase the score based on our revisions. We will ensure that the final version of the paper reflects your suggestion by using the keyword "planning" instead of "control."
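The tempering effect underlying this theorem can be illustrated numerically: raising a distribution to a power $\gamma < 1$ and renormalizing flattens it, which increases the probability mass on rare modes, such as near-optimal control sequences that seldom appear in the training data. The sketch below is only a toy illustration of this effect, not the paper's implementation; the three "modes" and their probabilities are hypothetical.

```python
import numpy as np

# Toy prior over three "control modes"; index 2 plays the role of a rare
# near-optimal mode that seldom appears in the training data (hypothetical).
p = np.array([0.80, 0.15, 0.05])

def reweight(p, gamma):
    """Temper the prior: p^gamma / Z. For gamma < 1 the distribution flattens."""
    q = p ** gamma
    return q / q.sum()

for gamma in (1.0, 0.5, 0.1):
    print(gamma, reweight(p, gamma).round(3))
# The mass on the rare mode grows monotonically as gamma shrinks below 1.
```

With $\gamma = 1$ the prior is unchanged; as $\gamma \to 0$ the tempered prior approaches uniform, mirroring the trade-off in the theorem (too small a $\gamma$ discards the prior entirely).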
Summary: The paper proposes a variant of diffusion models to control complex dynamical systems. Its contributions are threefold. First, the proposed model can optimize the trajectory and control sequence simultaneously. Second, it proposes a prior reweighting technique to generate control sequences superior to those in the training dataset. Third, it contributes a benchmark dataset, jellyfish movement control, for the complex dynamical control community. Strengths: Novelty: The novelty lies in two parts. A variant of the diffusion model is proposed to minimize the energy function and control objectives simultaneously. Moreover, it proposes a prior reweighting algorithm that enhances the sampling of good but low-probability trajectories. Clarity: It is clear in illustrating the methodologies and the experiments. Quality: It provides comprehensive details on the background of the model and dataset. The experiment results substantiate its claim. Significance: The proposed method can potentially control complex systems with a generative model with reduced cost and good long-term control sequences. Weaknesses: Several questions need to be clarified; see details in the questions section. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How is the offline dataset generated? Does it include the optimal control trajectory? What if the offline collected training control trajectory is too far from the optimal one? I am trying to understand if your model "finds" the existing trajectory in the dataset or if it "stitches" different pieces of trajectories and generates one that doesn't exist in the dataset. 2. For the cases in this paper, using the online control algorithm is not too expensive. Online RL for these cases could also give globally optimal control results. Could you compare the computational cost of your proposed method and an online RL method to demonstrate the benefits against the online RL algorithm? 3. 
For the diffusion-generated trajectory, how did you get Figure 4? Did you input the diffusion-generated trajectory to the numerical solver (Lilly-Pad) to get the physics simulation result? Or did you take the diffusion-generated flow field result as the "physics trajectory"? 4. I didn't quite get the intuition behind Figure 2. Why does reducing $\gamma$ to less than 1 shift the red point from margin to center? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations are explicitly mentioned in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments. We are glad that the reviewer recognizes the novelty, clarity, results, and significance of our work. Below, we address the reviewer’s questions one by one. >**Comment 1**. How is the offline dataset generated? Does it include the optimal control trajectory? What if the offline collected training control trajectory is too far from the optimal one? I am trying to understand if your model "finds" the existing trajectory in the dataset or if it "stitches" different pieces of trajectories and generates one that doesn't exist in the dataset. **Answer**: As detailed in **Appendices C.1 and D of the original submission**, for our 1D and 2D data, we input initial conditions and control sequences with random variations into the solvers to generate trajectories. For instance, in the 2D case, control angles are cosine curves with varying amplitude, mean, and duty cycle. These random variations are necessary as expert control sequence data are hard to obtain for nonlinear and high-dimensional physical systems. Consequently, generated sequences are typically far from optimal. This issue **motivates our prior reweighting technique**, which reduces the weight of prior control sequences to increase the probability of sampling near-optimal solutions. In this rebuttal process, our **additional theoretical analysis** shows that for tasks with suboptimal training set control sequences, using prior reweighting with $\gamma<1$ increases the likelihood of obtaining near-optimal control sequences compared to not using it ($\gamma=1$). Due to space limitations, for details of the formal statement of the theorem and proof, please refer to **Official Comment** on top of this webpage. Despite the offline training control sequences not being optimal, **diffusion models can combine segments of these trajectories** through conditional generation or control target guidance, as illustrated in [1]. 
In our experiments, a close look at Figure 4 reveals that although the entire curve has not appeared in the training set, its segments resemble parts of a cosine control curve from the training set. >**Comment 2**: For the cases in this paper, using the online control algorithm is not too expensive. Online RL for these cases could also give globally optimal control results. Could you compare the computational cost of your proposed method and an online RL method to demonstrate the benefits against the online RL algorithm? **Answer**: First, online training of SAC on our 2D Jellyfish control case is so **computationally demanding** that it is hardly feasible. However, we conducted a new experiment using SAC on our 1D Burgers' control task. The results are shown below. Our method is about **twice as fast as SAC and performs better**. | Method | Training time / hour | $J_{actual}$ $\downarrow$ | | - | - | - | | DiffPhyCon (ours) | 4.4 | 0.01103 | | DiffPhyCon-lite (ours) | 1.7 | 0.01139 | | SAC-online | 10.5 | 0.01567 | | SAC-offline | 8.0 | 0.03201 | Second, the cases in our paper are for evaluation purposes. Our method can be applied to more complex tasks, such as the **new high-dimensional indirect fluid control task** (see Figure 1 in the PDF of General Response), where online RL training is also infeasible due to high costs. Moreover, online RL is impractical in many settings, like dangerous exploration (e.g., autonomous underwater vehicles) [2], whereas offline learning can avoid these issues. Thus, **our method has broader applications compared to online RL**. >**Comment 3**: For the diffusion-generated trajectory, how did you get Figure 4? Did you input the diffusion-generated trajectory to the numerical solver (Lilly-Pad) to get the physics simulation result? Or did you take the diffusion-generated flow field result as the "physics trajectory"? **Answer**: Figure 4 shows the control results of three jellyfish, each with different initial conditions. 
Control sequences obtained from each method are input into the Lilypad solver to simulate the flow field trajectories, based on which we calculated the average movement speed and control objective $J$. Therefore, **we did not use the flow field trajectories generated directly by the diffusion model as the "physical trajectory"**. The reason for this is that the evaluation of these control methods, both our method and the baselines, should be based on simulated trajectories from the same solver, to compare them on a fair footing. **Details are provided in Appendix E.4 (Line 853 to Line 864) of our original submission.** >**Comment 4**: I didn't quite get the intuition behind Figure 2. Why does reducing $\gamma$ to less than 1 shift the red point from margin to center? **Answer**: We apologize for any confusion. In the figure, the red dot at the margin indicates a local minimum of $J$, while the red dot at the center represents the global minimum. In the joint distribution $p(u,w)=p(w)p(u|w)$, we assume the global minimum of $J$ has a lower probability than the local minima, as globally optimal trajectories rarely appear in the training dataset. By setting $\gamma<1$ and then normalizing by a constant $Z$, the distribution $p(w)^\gamma p(u|w)/Z$ flattens $p(u,w)$, increasing the probability of sampling at the global minimum. Thus, the figure illustrates this ideal situation: with $\gamma=1$, we sample a local minimum by using DiffPhyCon-lite; with $\gamma<1$, we sample the global minimum. However, this ideal scenario is only for intuitive illustration. Rigorous theoretical analysis shows that with $\gamma<1$, the probability of sampling near the global minimum increases compared to $\gamma=1$. Due to space limitations, for details of the formal theoretical analysis, please refer to **Official Comment** on top of this webpage. [1] Janner, et al. Planning with diffusion for flexible behavior synthesis. ICML 2022. [2] Levine, et al. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. 
arXiv:2005.01643, 2020. --- Rebuttal 2: Title: Rebuttal period Comment: Thanks for providing a detailed rebuttal to my questions and general proof. The quality and relevance of the rebuttal resolved my concerns. Together with the high-quality manuscript, which involves rich detail, it is definitely a paper beyond the acceptance level. Therefore, I would raise my score to 8. I would strongly encourage the author to publish the final version of the code to facilitate future research. --- Rebuttal Comment 2.1: Comment: Thank you for your positive feedback and for raising your score. We are delighted to hear that our rebuttal addressed your concerns and that you found the manuscript to be of high quality. We appreciate your encouragement to publish the final version of the code. We plan to release the code upon acceptance of the paper, and we will make sure to include clear documentation and instructions for usage. Once again, thank you for your constructive comments and support.
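The $\gamma$-flattened sampling discussed in the answer to Comment 4 above can also be sketched at the level of a single denoising step: since $\nabla \log\big[p(w)^{\gamma}p(u|w)\big] = \nabla \log p(u,w) + (\gamma-1)\nabla \log p(w)$, in the $\epsilon$-parameterization only the $w$-component of the predicted noise gains an extra $(\gamma-1)\,\epsilon_\phi$ term. The sketch below uses placeholder linear denoisers whose shapes and signatures are assumptions for illustration, not the paper's trained networks.

```python
import numpy as np

# Placeholder denoisers (hypothetical): eps_joint stands in for eps_theta on
# the joint sample (u, w); eps_prior stands in for eps_phi on w alone.
def eps_joint(u, w, t):
    return 0.1 * u, 0.1 * w

def eps_prior(w, t):
    return 0.2 * w

def reweighted_eps(u, w, t, gamma):
    """Noise prediction for sampling from p(w)^gamma * p(u|w)/Z:
    the u-component is unchanged, the w-component gets (gamma-1)*eps_phi."""
    e_u, e_w = eps_joint(u, w, t)
    return e_u, e_w + (gamma - 1.0) * eps_prior(w, t)

rng = np.random.default_rng(0)
u, w = rng.standard_normal(8), rng.standard_normal(4)
e_u, e_w = reweighted_eps(u, w, t=10, gamma=0.5)
# With gamma = 1 the extra term vanishes and plain joint denoising is recovered.
```

This also makes clear why the two networks can be trained independently: $\gamma$ only enters at sampling time, when their predictions are combined.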
Summary: The method learns the energy function, $\epsilon_\theta$, which is used in the energy optimization target to generate the control sequences and system trajectory. A denoising network is trained to approximate the gradient of $\epsilon_\theta$. The network and optimization framework takes a global state of u and w of all time steps. The training and inference follow the process of a diffusion model. The paper also introduces prior reweighting to enable the discovery of control sequences that diverge significantly from training. Another network $\epsilon_\phi$ to learn the prior distribution of the control sequences is introduced. The proposed method is tested by two systems (1D Burgers’ equation and 2D jellyfish movement control). The experiments compared different control methods. Strengths: - The paper develops a generative method to control complex physical systems. The method optimizes system trajectory and control sequences jointly in the entire horizon by diffusion models. A prior reweighting technique is also proposed to improve the model's generalization ability. - The results of the jellyfish systems are strong. The proposed method generates realistic control sequences and trajectories that align with the established findings in fluid dynamics. - The paper generated a dataset for the jellyfish system, which contributes to the benchmark of this area. Weaknesses: - The experiments are relatively limited - only include one 1D example and one 2D example. - I have minor reservations about the novelty of the paper since the primary method relies on the existing diffusion model. Technical Quality: 3 Clarity: 3 Questions for Authors: - Compared with other methods, does the proposed method use less, similar, or more time to train (for the learning-based methods) and inference? - Is it easy or hard to extend this method to 3D systems? In the 2D example, a 3D U-Net is used. 
Does this make it hard to apply this method to 3D systems as it requires a much larger 4D network? - In the prior reweighting method, are $\epsilon_\theta$ and $\epsilon_\phi$ trained jointly, or is $\epsilon_\theta$ trained first and then $\epsilon_\phi$? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discussed the limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments. We are glad that the reviewer appreciates our jellyfish results and dataset contribution. Below, we address the reviewer’s questions one by one. >**Comment 1**: The experiments are relatively limited - only include one 1D example and one 2D example. **Answer**: Our two tasks align with existing data-driven physical control papers [1-3]. For further evaluation, we **conducted an additional 2D incompressible fluid control experiment** using a setup similar to [3]. Control forces were applied within a 64x64 grid flow field, excluding a semi-enclosed region, to minimize the amount of smoke failing to exit through the top-middle exit. This **high-dimensional indirect control problem** involves managing 2-dimensional forces at approximately 1,700 grid points per time step, resulting in about **100,000 control variables** in total across 32 time steps, making it highly challenging. We generated 20,000 training trajectories, with an average control objective of 49.9%. We evaluated 50 test samples. The results (Table 1) show that **our method still has significant advantages**, especially in validating the role of prior reweighting. For **visualization** of the results, please refer to Figure 1 in the PDF file in **General Response**. These updates will be added to the final manuscript. Table 1: Performance comparison on the new 2D indirect fluid control task. | Method | Control Objective $\downarrow$ | | - | - | | BC | 0.3085 | | BPPO | 0.3066 | | SAC (surrogate-solver) | 0.3212 | | SAC (offline) | 0.6503 | | DiffPhyCon-lite | 0.2324 | | DiffPhyCon ($\gamma$=0.96) | **0.2254** | >**Comment 2**: I have minor reservations about the novelty of the paper since the primary method relies on the existing diffusion model. **Answer**: Our method, while relying on diffusion models, introduces the following three key innovations: 1. 
**New Application Area**: We leveraged diffusion models' advantages, such as ease of generalization and global optimization over time, for physical system control. This establishes a new application area for diffusion models. 2. **Prior Reweighting Technique**: We addressed the challenge of generating control sequences that outperform those in the training set by introducing prior reweighting. This is a novel technique in the diffusion model literature, marking a significant technical contribution. 3. **Theoretical Analysis**: During the rebuttal, we conducted additional theoretical analysis: for tasks where training set control sequences are far from optimal, using a $\gamma<1$ in prior reweighting increases the probability of obtaining near-optimal control sequences compared to not using the technique ($\gamma=1$). This aligns with our intuitive explanation in the original manuscript and will be included in the final manuscript. Due to space limitations, for formal theoretical analysis, please refer to **Official Comment**. >**Comment 3**: Compared with other methods, does the proposed method use less, similar, or more time to train (for the learning-based methods) and inference? **Answer**: The efficiency comparison for inference on two tasks was detailed in **Appendix I of the original submission**. Those results are combined with training time comparisons in the following Table 2 and Table 3. Here are the key points: - **Inference Time**: Our method is competitive among most methods, except SAC. By adopting the fast sampling method DDIM (DiffPhyCon-DDIM), our method's efficiency significantly improves, nearing the fastest models in the 1D task. - **Training Time**: The training time for our method is smaller (1D task) or comparable (2D task) to other learning-based methods. These details will be included in the final manuscript. Table 2: Training and inference time comparison on 1D task. 
| Method | Training Time / hours | Inference Time / seconds | | - | - | - | | DiffPhyCon-lite | 1.7 (1 A100-80G GPU, 8 CPUs) | 21.13 | | DiffPhyCon | 4.4 (1 A100-80G GPU, 8 CPUs) | 58.97 | | DiffPhyCon-DDIM (8 sampling steps) | 1.7 (1 A100-80G GPU, 8 CPUs) | 0.53 | | BPPO | 8.9 (1 V100-32G GPU, 12 CPUs) | 0.82 | | BC | 8.8 (1 V100-32G GPU, 12 CPUs) | 1.22 | | SAC | 10.5 (1 A6000-48G GPU, 16 CPUs) | 0.11 | | SL | 2.6 (1 V100-32G GPU, 12 CPUs) | 74.85 | Table 3: Training and inference time comparison on 2D task. | Method | Training Time / hours | Inference Time / seconds | | - | - | - | | DiffPhyCon | 62 (2 A100-80G GPUs, 32 CPUs) | 252.2 | | DiffPhyCon-DDIM (50 sampling steps) | 62 (2 A100-80G GPUs, 32 CPUs) | 12.6 | | BPPO | 3.0 (1 A100-80G GPU, 16 CPUs) | 1.1 | | BC | 2.8 (1 A100-80G, 16 CPUs) | 1.0 | | SL | 52.1 (1 A100-80G GPU, 16 CPUs) | 133.5 | | SAC | 9.5 (1 A100-80G, 16 CPUs) | 0.2 | | MPC | 52.1 (1 A100-80G GPU, 16 CPUs) | 1401.7 | >**Comment 4**. Is it easy or hard to extend this method to 3D systems? In the 2D example, a 3D U-Net is used. Does this make it hard to apply this method to 3D systems as it requires a much larger 4D network? **Answer**: Our method is agnostic to the neural network backbone of the diffusion model. For a 3D system, we can switch to an appropriate neural network like a 4D U-Net. Regarding efficiency, controlling 3D complex physical systems is challenging for all current methods. Although autoregressive models require only a 3D neural network, they evaluate iteratively at each physical time step [1,4]. Our method improves temporal efficiency by generating the full trajectory simultaneously. >**Comment 5**. In the prior reweighting method, are $\epsilon_\theta$ and $\epsilon_\phi$ trained jointly, or is $\epsilon_\theta$ trained first and then $\epsilon_\phi$? **Answer**: These two models can be trained simultaneously as they are independent. [1] Solving PDE-constrained control problems using operator learning, 2022. 
[2] Optimal control of PDEs using physics-informed neural networks (PINNs), 2021. [3] Learning to Control PDEs with Differentiable Physics, 2020. [4] Reinforcement Learning with Function-Valued Action Spaces for Partial Differential Equation Control, 2018. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. I initially had the same concern as Mp9v that using only 2D examples might not be sufficiently convincing. If applying to 3D grids, this method requires a 4D network which could be much harder to train. However, I agree with the authors' response that the existing 2D examples are complex enough, and the proposed method could be applicable to 3D tasks. Developing a more efficient network architecture for 4D data might be beyond the scope of this paper. The authors' response regarding the new application area and the prior reweighting technique also alleviated my concerns about novelty. Therefore, I raised my score to 6.
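The DiffPhyCon-DDIM rows in Tables 2 and 3 above owe their speedup to a deterministic DDIM update run over a shortened timestep schedule, so the denoiser is evaluated only a handful of times instead of once per diffusion step. The sketch below shows the generic mechanism; the noise schedule, chain length, and zero-noise placeholder model are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def ddim_sample(eps_model, x, alpha_bar, steps):
    """Deterministic DDIM: visit only `steps` timesteps of the full chain."""
    taus = np.linspace(len(alpha_bar) - 1, 0, steps).astype(int)
    for t, t_prev in zip(taus[:-1], taus[1:]):
        eps = eps_model(x, t)
        # Predict the clean sample, then jump directly to timestep t_prev.
        x0 = (x - np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alpha_bar[t])
        x = np.sqrt(alpha_bar[t_prev]) * x0 + np.sqrt(1.0 - alpha_bar[t_prev]) * eps
    return x

T = 1000                                           # full diffusion chain length
alpha_bar = np.cumprod(1.0 - np.linspace(1e-4, 0.02, T))
x = np.random.default_rng(0).standard_normal(16)   # stand-in for a noised (u, w)
out = ddim_sample(lambda z, t: np.zeros_like(z), x, alpha_bar, steps=8)
# Only 7 denoiser evaluations instead of ~1000, which is why few-step DDIM
# sampling is so much faster at inference time.
```

The same trained network is reused; only the sampling schedule changes, which is consistent with DiffPhyCon and DiffPhyCon-DDIM sharing one training run in the tables.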
Rebuttal 1: Rebuttal: # General Response We thank the reviewers for their thorough and constructive comments, as well as the AC's organization of the review of our paper. We are glad that the reviewers think that our paper is **well-written** (DFgi, n6Yf, Mp9v) and **well-motivated** (n6Yf, Mp9v). Reviewers 8Er5 and DFgi recognized the **novelty of our paper**: a generative method to control complex physical systems, and a prior reweighting technique. Reviewers appreciate the **strong results of the jellyfish systems** (8Er5) and the **comprehensive details of experiments** (DFgi, n6Yf, Mp9v). Reviewer 8Er5 also recognizes the **importance of our contributed dataset** for the jellyfish system. Based on the reviewers’ valuable feedback, we have **conducted several additional experiments** and **provided additional theoretical analysis**. Below, we address the issues pointed out by the reviewers and resolve possible misunderstandings: 1. We **present an additional theoretical analysis of the prior reweighting technique** to illustrate its effectiveness. The theoretical conclusion is: for control problems where control sequences in the training set are far from optimal, there exists a $\gamma<1$ such that using the prior reweighting technique with this $\gamma$ shows improvement compared to not using it. For **details of the formal statement of the theorem and proof**, please refer to **Official Comment**. 2. We **introduce a new 2D incompressible fluid control task** to further demonstrate the effectiveness of our method, in response to Reviewers 8Er5 and Mp9v. This is a **high-dimensional indirect** control task, and thus very challenging. Our method still shows **significant improvement compared to the baselines**. For details, please refer to the responses to Reviewers 8Er5 and Mp9v. For **visualization** of the generated control fields and the generated smoke density field, please refer to **Figure 1 in the attached PDF file**. 3. 
We **further clarify our contributions**, in response to Reviewer 8Er5's minor reservations about the novelty of the paper. Our contributions on top of the diffusion models include **three aspects**: expanding the application of diffusion models, proposing a novel prior reweighting technique, and providing theoretical analysis of the technique. For details, please refer to the responses to Reviewer 8Er5. 4. We **add online RL results for the 1D control task**. The results show that our method has higher training efficiency and outperforms online SAC. For details, please refer to the responses to Reviewer DFgi. 5. We **add a comparison of the training efficiency** of DiffPhyCon and the learning-based baselines. For details, please refer to the responses to Reviewer 8Er5. 6. We **agree to revise the Introduction and other related sections**, as suggested by Reviewer n6Yf. We will change the keyword "control" to "planning". **The above updates will be presented in our final manuscript**. We now individually address the concerns of reviewers. Please see the responses below each review. Pdf: /pdf/306db45f1bf87b66fc03724e7e5629b9d5e6e822.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Online Composite Optimization Between Stochastic and Adversarial Environments
Accept (poster)
Summary: This paper studies the problem of online composite optimization under the Stochastically Extended Adversarial (SEA) model, where the environments select loss functions in a manner that interpolates between fully stochastic and fully adversarial scenarios. Specifically, the authors establish regret bounds for three kinds of loss functions and introduce a multi-level universal algorithm that can achieve these bounds simultaneously, without prior knowledge of the loss functions. These theoretical guarantees are interesting as they can not only recover previous findings for online composite optimization in both fully stochastic and adversarial environments but also lead to new results, including the first problem-dependent bound for exp-concave cases in the full adversarial setting. Strengths: 1. The presentation is excellent, and the proof is readily comprehensible. 2. The investigated problem of online composite optimization between stochastic and adversarial environments is significant, as it not only bridges two previously separate fields in online composite optimization but also extends the special case without the regularizer. 3. The theoretical results are comprehensive and generalize previous findings in two extreme settings. Moreover, the authors clearly discuss the inadequacies of directly applying the existing work by Scroccaro et al. [2023] in the intermediate setting. Weaknesses: My only concern is that this article appears to be purely theoretical at present and lacks empirical validation. I believe that some experimental studies could further strengthen the theoretical results. Technical Quality: 3 Clarity: 4 Questions for Authors: For questions please refer to the weaknesses section. some suggestions: 1. I recommend relocating the Implications Section to follow the Universal Strategy Section. Such reorganization will enhance the logical flow and ensure the continuity of the narrative. 2. 
Currently, this paper primarily focuses on static regret minimization. I am curious whether we can establish the more general dynamic regret bounds in the investigated setting. I believe that theoretical insights into dynamic regret would also be significant for the field. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: The authors discuss some limitations & future works in the Conclusion. And there is not potential negative societal impact, since this paper is purely theoretical. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Many thanks for the constructive reviews!** --- **Q1**: I believe that some experimental studies could further strengthen the theoretical results. **A1**: Thanks for the helpful suggestion! We have conducted empirical studies to verify our theoretical findings. Detailed descriptions can be found in the **General Response**. --- **Q2**: I recommend relocating the Implications section to follow the Universal strategy section. **A2**: We truly appreciate your advice, and will relocate the Implications section in the revised version to improve the logical flow. --- **Q3**: I am curious about whether we can establish the more general dynamic regret bounds in the investigated setting. **A3**: Thanks for the insightful question! Compared to static regret minimization, minimizing the dynamic regret is more challenging since it requires a guarantee that holds universally over any comparator sequence. In the composite SEA model, we have also attempted to optimize the dynamic regret. Theoretically, we can achieve a provable dynamic regret bound of $O((\sqrt{\sigma_{1:T}^2} + \sqrt{ \Sigma_{1:T}^2})(P_T+1))$, where $P_T$ denotes the path-length of the comparator sequence, and a tighter $O(P_T + (\sqrt{\sigma_{1:T}^2} + \sqrt{ \Sigma_{1:T}^2})\sqrt{P_T+1})$ dynamic regret bound based on the meta-expert framework. Both results can also reduce to the purely adversarial and purely stochastic settings in online composite optimization under the corresponding specifications of $\sigma_{1:T}^2$ and $\Sigma_{1:T}^2$. We believe the two results can further broaden the field of composite SEA, and will add them in the extended version. --- We hope that our responses can address your concerns, and we are also happy to provide further clarifications during the reviewer-author discussions if necessary. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' rebuttal, and it has fully addressed my concerns. I believe this paper should be accepted, and therefore, I raise my score to 8. 
--- Reply to Comment 1.1.1: Comment: Many thanks for your kind response and support! We will revise our paper according to your constructive suggestions.
Summary: This paper investigates online composite optimization in the regime between stochastic and adversarial environments, establishing theoretical guarantees for three types of time-varying loss functions: smooth and general convex, smooth and strongly convex, and smooth and exp-concave. These bounds not only extend previous results in online composite optimization but also recover those in the special setting without the regularizer. Furthermore, the paper introduces a universal algorithm for online composite optimization that is agnostic to the type of functions and attains optimal bounds for all three cases simultaneously. Strengths: - The paper is well-organized, with a clear presentation of the background, motivations, and methods. The proofs are also easy to follow. - The investigation of the intermediate setting between fully stochastic and adversarial environments is an important topic in the community. This paper enriches this line of work, especially in online composite optimization. The regret bounds relate to both stochastic and adversarial aspects of the environments (i.e., $\sigma_{1:T}^2$ and $\Sigma_{1:T}^2$), allowing adaptation to different environments automatically. - The authors provide intermediate examples and explain the implications of their bounds in these cases, further highlighting the significance of their findings. Weaknesses: - Although the theoretical results are substantial, the paper lacks experimental validation. It would be more compelling if the authors conducted experiments to validate their theoretical results. - This paper appears to be an extension of [Sachs et al., 2022; Chen et al., 2023]. The authors should highlight their technical contributions more clearly. Technical Quality: 4 Clarity: 4 Questions for Authors: - The proposed algorithms in this paper are based on optimistic OMD. Is it possible to achieve similar conclusions using optimistic FTRL? 
- When the function $f_t(\cdot)$ is non-smooth, can we still apply the proposed algorithms? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Many thanks for the constructive reviews!** --- **Q1**: Although the theoretical results are substantial, the paper lacks experimental validation. **A1**: Thanks for the helpful suggestion! We have conducted empirical studies to verify our theoretical findings. Detailed descriptions can be found in the **General Response**. --- **Q2**: The authors should highlight their technical contributions more clearly. **A2**: Thanks for the valuable suggestion! We have summarized our technical contributions in A1 to Reviewer qCfX. Additionally, we would like to emphasize that although some employed techniques are not entirely new, introducing them to the composite SEA model and modifying them to deliver favorable results remain highly non-trivial. --- **Q3**: Is it possible to achieve similar conclusions using optimistic FTRL? **A3**: Thanks for the insightful question! We believe that **in static regret minimization, it is possible to achieve similar theoretical guarantees using optimistic FTRL under the composite SEA model**. The reason is that optimistic online learning methods, including optimistic OMD and optimistic FTRL, can yield gradient-variation bounds, from which we are able to capture the stochastic and adversarial aspects of the environments with careful analysis. In the literature, optimistic FTRL-based methods, such as CAO-FTRL [Mohri and Yang, 2016], have been shown to ensure gradient-variation bounds in the general convex and strongly convex cases of online composite optimization. In the exp-concave case, although no optimistic FTRL-based method currently secures the gradient-variation bound, we believe achieving such a guarantee is feasible. Therefore, with *an in-depth technical analysis* of optimistic FTRL-based methods, it is possible to achieve similar theoretical results as those presented in our paper for composite SEA. 
However, we would like to emphasize that **in the dynamic regret minimization, our OMD-based methods offer more advantages compared to FTRL-based methods**. As stated in A3 to Reviewer wJwu, our methods can adapt to non-stationary environments and minimize the more general dynamic regret, whereas FTRL-based methods, to the best of our knowledge, have not been proven to ensure similar theoretical results in online composite optimization. Moreover, we note that in standard online optimization without the regularizer, Jacobsen and Cutkosky [2022] have shown that parameter-free variants of FTRL-based methods cannot deliver a dynamic regret bound better than $O(P_T \sqrt{T})$, where $P_T$ reflects the non-stationarity of the environments. This finding partially reveals the limitation of FTRL-based methods in the dynamic regret minimization. --- **Q4**: When the function $f_t(\cdot)$ is non-smooth, can we still apply the proposed algorithms? **A4**: Below, we discuss the importance of the smoothness in our methods, and the potential modifications for handling the non-smooth $f_t(x)$. - **Importance of smoothness.** In our analysis, the importance of the smoothness of $f_t(x)$ lies in decoupling the stochastic and adversarial aspects of the environments from the performance of algorithms. To be more precise, the smoothness is employed to extract $\sigma_{1:T}^2$ and $\Sigma_{1:T}^2$, which capture the stochastic and adversarial difficulties in the composite SEA model, respectively, from the difference between $\nabla f_t(x_t)$ and $M_t$.
By choosing the optimism $M_{t} = \nabla f_{t-1}(x_{t-1})$, such an extraction introduces only an additional stability term, i.e., $\sum_{t=2}^T || x_t - x_{t-1} ||_2^2$, which is typically controllable and can be retracted with careful analysis; - **Modification to non-smoothness.** To handle the non-smooth $f_t(x)$, a classical approach is to apply the implicit update [Campolongo and Orabona, 2020, Chen and Orabona, 2023], which replaces the linear approximation of $f_t(x)$ in the update rules with the original function $f_t(x)$ itself. Following the idea of the implicit update, one possible attempt is to modify the update rules of OptCMD in the following ways: \begin{align} \hat{x}\_{t+1}=&\arg\min\_{x\in\mathcal{X}}[\left<\nabla f_t(x\_t),x\right>+r(x)+\mathcal{B}^{\mathcal{R}\_t}(x,\hat{x}\_t)]\\\\x\_{t+1}=&\arg\min\_{x\in\mathcal{X}}[f_t(x)+r(x)+\mathcal{B}^{\mathcal{R}\_{t+1}}(x,\hat{x}\_{t+1})] \end{align} where $\langle M_{t+1},x\rangle$ is replaced with $f_t(x)$. As mentioned above, the smoothness of $f_t(x)$ is crucial for revealing the adversarial and stochastic aspects of the environments. Therefore, in the non-smooth setting, the primary challenge lies in designing an analysis strategy that can still capture the two aspects of the environments without using the smoothness. This seems highly non-trivial, and we plan to investigate it in future work. **Reference**. [1] A. Jacobsen and A. Cutkosky. Parameter-free mirror descent. COLT, 2022. [2] N. Campolongo and F. Orabona. Temporal variability in implicit online learning. NeurIPS, 2020. [3] K. Chen and F. Orabona. Generalized implicit follow-the-regularized-leader. ICML, 2023. --- We hope that our responses can address your concerns, and we are also happy to provide further clarifications during the reviewer-author discussions if necessary.
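To make the update rules above concrete, here is a minimal numerical sketch of optimistic composite mirror descent in the Euclidean case with an $\ell_1$ regularizer, where both argmins reduce to soft-thresholding. This is an unconstrained, illustrative sketch only: the bounded domain $\mathcal{X}$, general Bregman divergences, and the implicit variant discussed above are all omitted, and the function names are ours.

```python
import numpy as np

def soft_threshold(v, tau):
    """Closed-form proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def optcmd_step(x_hat, grad_t, M_next, eta, lam):
    """One round of optimistic composite mirror descent with the Euclidean
    Bregman divergence B(x, y) = ||x - y||^2 / (2 * eta) and r(x) = lam * ||x||_1.

    x_hat  : auxiliary iterate hat{x}_t
    grad_t : gradient of f_t at the played point x_t
    M_next : optimism M_{t+1} (e.g., the last observed gradient)
    """
    # hat{x}_{t+1} = argmin <grad_t, x> + lam*||x||_1 + ||x - x_hat||^2 / (2*eta)
    x_hat_next = soft_threshold(x_hat - eta * grad_t, eta * lam)
    # x_{t+1} = argmin <M_{t+1}, x> + lam*||x||_1 + ||x - x_hat_next||^2 / (2*eta)
    x_next = soft_threshold(x_hat_next - eta * M_next, eta * lam)
    return x_hat_next, x_next
```

On a fixed smooth loss $f(x) = \frac{1}{2}\|x - c\|^2$ with optimism $M_{t+1} = \nabla f(x_t)$, the iterates converge to the composite minimizer $\mathrm{soft\_threshold}(c, \lambda)$, matching the role of the stability term discussed above.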
Summary: This paper investigates online composite optimization within the SEA model. Specifically, it demonstrates that by appropriately adjusting the predictor and step size of the algorithm known as OptCMD, similar regret bounds to those in online optimization within the SEA model can be achieved for general convex functions, strongly convex functions, and exp-concave functions. Additionally, the paper proposes extending the universal algorithm for online optimization by Yan et al. [2023] to online composite optimization. Strengths: - This paper studies a problem setting considered important for practical applications. - By making simple adjustments to existing methods, the desired theoretical results are obtained. - The comparison with existing research is detailed, especially with Scroccaro et al. [2023]. - Overall, it is well-written and easy to follow. Weaknesses: - The techniques proposed in this paper do not offer significant novelty. Technical Quality: 3 Clarity: 3 Questions for Authors: - If the regularizer $r(\cdot)$ is time-dependent, can the same regret upper bound be achieved with the current analysis? Or are there parts of the analysis that would require non-trivial adjustments? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations of this study are discussed in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Many thanks for the constructive reviews!** --- **Q1**: The techniques proposed in this paper do not offer significant novelty. **A1**: We acknowledge that our research is partially inspired by existing studies, and some techniques employed may not be entirely novel. However, we would like to emphasize that introducing these techniques into the composite SEA model and modifying them to achieve favorable results remain highly non-trivial. Additionally, there also exist unique challenges in our investigated problem (see A1 to Reviewer ZP8k), which necessitate novel technical innovations to address them. In the following, we summarize our technical novelties. - **An in-depth analysis.** In the composite SEA, a straightforward approach is to simply adjust the parameters of OptCMD according to $(6)$, deliver the intermediate result by the original analysis in Scroccaro et al. [2023], and then take the expectations. However, this approach can only ensure much looser bounds compared to ours (see *A straightforward attempt* in A1 to Reviewer ZP8k). To address this issue, we reorganize and dig into the analysis of OptCMD. At the intermediate steps, we isolate the quantities related to $\sigma_{1:T}^2$ and $\Sigma_{1:T}^2$ (see Lines $505$-$508$ for the general convex case), and carefully manage the effect of expectations on them, ultimately achieving a series of favorable theoretical guarantees; - **A novel investigation.** Compared to the original study for OptCMD [Scroccaro et al., 2023], we further explore the exp-concave case, and provide an in-depth analysis for OptCMD with the *new* configurations $(10)$. It should be noticed that our result for the exp-concave case is versatile. On the one hand, it can reduce to the existing bound of $O(d \log T/(\alpha T))$ in the fully stochastic environments; on the other hand, it can deliver a tighter regret bound of $O((d/\alpha) \log V_T)$ than existing results in the purely adversarial environments. 
Detailed discussions can be found in Lines $245$-$250$; - **A crafted universal strategy.** Our universal algorithm is based on two observations: *the time-invariance of $r(x)$* and *the accessibility of expert decisions for the current round*. These observations provide the foundation for utilizing information from $r(x)$ to track the expert performance. Based on them, we carefully design our universal algorithm with two distinctive features: (i) using the expert decisions for the current round to update weights; (ii) choosing the multi-level MsMwC with the composite surrogate loss and optimism in $(13)$ and $(14)$ as the meta-algorithm. With rigorous analysis, we demonstrate that our universal algorithm is able to deliver the desired regret bounds for three kinds of functions simultaneously, without prior knowledge of the function type. To the best of our knowledge, this is the *first* universal algorithm that can explicitly support composite loss functions. --- **Q2**: If the regularizer $r(\cdot)$ is time-dependent, can the same regret upper bound be achieved with the current analysis? Or are there parts of the analysis that would require non-trivial adjustments? **A2**: Thanks for the insightful question! We notice that in the purely adversarial setting, there exist several efforts investigating the composite loss with a time-dependent regularizer [Scroccaro et al., 2023, Hou et al., 2023]. However, under the more general composite SEA model, analyzing the time-dependent regularizer is quite involved, and necessitates a more in-depth exploration.
To be precise, we present the following updating rules of OptCMD for the time-dependent $r_t(x)$: \begin{align} \hat{x}\_{t+1}=&\arg\min\_{x\in\mathcal{X}}[\left<\nabla f_t(x\_t),x\right>+r_t(x)+\mathcal{B}^{\mathcal{R}\_t}(x,\hat{x}\_t)]\\\\x\_{t+1}=&\arg\min\_{x\in\mathcal{X}}[\left<M_{t+1},x\right>+P_{t+1}(x)+\mathcal{B}^{\mathcal{R}\_{t+1}}(x,\hat{x}\_{t+1})] \end{align} where $M_{t+1}$ and $P_{t+1}(x)$ denote the estimates of $\nabla f_{t+1}(\mathbf{x}\_{t+1})$ and $r_{t+1}(x)$ in the next round, respectively. Then, we consider the following two scenarios: - **The known $r_t(x)$ setting**, in which $r_t(x)$ is determined and known to the learner in advance. In this setting, it is natural to choose $P_{t+1}(x) = r_{t+1}(x)$ at round $t$. In this way, there is no estimation error on $r_t(x)$, so that we can achieve the same theoretical guarantees as those in our paper; - **The unknown $r_t(x)$ setting**, in which $r_t(x)$ is selected by the environments either stochastically, adversarially, or in a manner that interpolates between the two extremes, and is only revealed after the learner submits the decision $x\_t$. In this setting, we can only choose $P_{t+1}(x) = r_{t}(x)$ at round $t$, and hence, an estimation error on $r_t(x)$, such as $F_T=\sum_{t=1}^T\sup_{x\in\mathcal{X}}|r_{t}(x)-r_{t-1}(x)|$, is inevitably introduced into the final regret bound. Therefore, to capture the variation of $r_t(x)$ in the composite SEA model, it is necessary to incorporate quantities in terms of $r_t(x)$ that capture the adversarial and stochastic aspects of the environments, which will require additional analysis. Furthermore, it should be noticed that $r_t(x)$ is assumed to be non-smooth, and thus, existing analytical techniques on $f_t(x)$ cannot be directly applied to $r_t(x)$, which also demands extra effort. We are deeply grateful to the reviewer for raising the insightful question.
We believe that investigating the time-dependent $r_t(x)$ will further advance the understanding of composite SEA, and we plan to explore it in our future research. **Reference.** [1] R. Hou et al. Dynamic regret for online composite optimization. ArXiv, 2023. --- We hope that our responses can address your concerns, and we are also happy to provide further clarifications during the reviewer-author discussions if necessary. --- Rebuttal Comment 1.1: Comment: I appreciate the detailed response. I will keep my score because the other reviews and the responses did not change my opinion that this paper mainly combines existing techniques (e.g., Yan et al. (2023), Sachs et al. (2022), and Chen et al. (2023)) and does not make a significant theoretical contribution. However, I believe that this paper makes a good contribution and is above the bar for acceptance. --- Reply to Comment 1.1.1: Comment: We greatly appreciate your kind response and support! In the revised version, we will provide clearer statements of the technical novelties. Thank you again for your constructive comments!
Summary: The authors study the online composite optimization under the Stochastically Extended Adversarial (SEA) model proposed by Sachs et al., 2022. They show that optimistic composite mirror descent (OptCMD), a variant of OMD by [Scroccaro et al., 2023], can achieve regret guarantees that match existing bounds for the SEA model without the regularizer. The bounds are derived for various classes of convex losses including convex functions, exp-concave functions, and strongly convex functions. Furthermore, the authors also propose a universal algorithm that achieves optimal bounds without requiring prior information of the function class. Strengths: The paper is well-written and ideas are presented clearly. The authors extend the existing algorithm OMD by [Scroccaro et al., 2023] to the composite loss setting. The main novelty lies in the careful selection of the algorithm's parameters that lead to a good regret bound. I have not gone through the technical details very carefully but the results look interesting and extend the known results from noncomposite to composite loss setting. Weaknesses: While the results look nice, my main concern lies in evaluating the novelty of the paper. I would be interested in knowing the challenges faced by the authors to extend the existing analysis of OMD by [Scroccaro et al., 2023] to the composite setting. Especially, what technical challenges does the new setting throw beyond a different selection of algorithm's parameters? Do these challenges require the authors to come up with any technical innovation in their proofs? Technical Quality: 4 Clarity: 3 Questions for Authors: Please see the above. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 2 Limitations: Please see the above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Many thanks for the constructive reviews!** --- **Q1**: What technical challenges does the new setting throw beyond a different selection of algorithm's parameters? Do these challenges require the authors to come up with any technical innovation in their proofs? **A1**: Thanks for the insightful question! We would like to emphasize that since we focus on the expected regret minimization in composite SEA, simply adjusting the algorithm's parameters is insufficient to achieve the desired bounds. In other words, it is also necessary to delve deeper into the analysis and make the corresponding modifications at intermediate steps to carefully deal with the expectation operation. To illustrate this clearly, we take the general convex case as an example. - **A straightforward attempt.** One can configure OptCMD with the parameters in $(6)$, obtain the regret bound of $\mathrm{Regret}\_T = O(\sqrt{V_T})$ by the analysis of Scroccaro et al. [2023], and subsequently apply the expectation operation, i.e., $\mathbb{E}[\mathrm{Regret}\_T] = O(\mathbb{E}[\sqrt{V_T}])$ to deliver the expected bound under the composite SEA model. However, it can be verified that such a straightforward approach can only deliver a much looser bound compared to ours; - **The underlying reason and our solution.** The underlying reason lies in that the above attempt only takes the expectation after the holistic analysis, neglecting the impact of expectation on intermediate results. To address this issue, we reorganize and dig into the analysis of OptCMD. At the intermediate steps, we isolate the quantities related to $\sigma_{1:T}^2$ and $\Sigma_{1:T}^2$ (see Lines $505$-$508$), and carefully manage the effect of expectations on them, ultimately achieving the desired bounds. Similar strategies are also applied in the strongly convex and exp-concave cases. 
Additionally, we face the following challenges: - **Analysis for the exp-concave case.** In the original study for OptCMD [Scroccaro et al., 2023], the analysis is limited to the general convex and strongly convex cases. In our paper, we extend the investigation to the *new* exp-concave case, and provide an in-depth analysis for OptCMD with the *new* configurations $(10)$. It should be noticed that our theoretical result for the exp-concave case is versatile. On the one hand, it can reduce to the existing results of $O(d \log T/(\alpha T))$ in the fully stochastic environments; on the other hand, it can deliver a tighter regret bound of $O((d/\alpha) \log V_T)$ than existing results in the purely adversarial environments. Detailed discussions can be found in Lines $245$-$250$; - **Design and analysis for the universal strategy.** As we have elaborated in Lines $289$-$290$, the primary challenge is to develop a meta-algorithm capable of tracking experts according to their performance on composite losses, while maintaining the overall regret bound. To address this issue, we propose our universal algorithm, which is based on two observations: *the time-invariance of $r(\cdot)$* and *the accessibility of expert decisions for the current round*. Consequently, we carefully design our universal algorithm with two distinctive features: (i) using the expert decisions for the current round to update the weights; (ii) choosing the multi-level MsMwC as the meta-algorithm and employing the composite surrogate loss and optimism in $(13)$ and $(14)$ to track the expert performance. In the analysis, we reveal that although the additional regularization term $r(\cdot)$ is included in the surrogate losses, it can be safely controlled through meticulous analysis, thereby eliminating its impact on the meta-regret.
The corresponding analysis can be found in Lines $614$-$617$ for the general convex case, Lines $635$-$643$ for the strongly convex case and Lines $655$-$658$ for the exp-concave case. It should be emphasized that although some techniques employed in our work may not be entirely novel, introducing them to the composite SEA model and modifying them to deliver favorable results remain highly non-trivial. --- We greatly appreciate your constructive review and helpful feedback, and we will offer more detailed descriptions of the technical challenges and innovations in the revised version. We hope that our responses can address your concerns, and we are also happy to provide further clarifications during the reviewer-author discussions if necessary.
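As a toy illustration of how a meta-algorithm can track experts by their performance on composite losses, the following sketch uses a bare exponentially weighted update. This is a drastic simplification of the multi-level MsMwC with composite surrogate loss and optimism described above (single level, no optimism, no surrogate corrections), and all names are ours, not the paper's.

```python
import numpy as np

def hedge_update(weights, expert_losses, eta):
    """Exponentially weighted update: experts with smaller composite loss
    f_t(x) + r(x) in the current round receive larger weight next round."""
    w = weights * np.exp(-eta * np.asarray(expert_losses))
    return w / w.sum()

def meta_decision(weights, expert_decisions):
    """Combine the experts' decisions for the current round by their weights
    (rows of expert_decisions are individual expert decisions)."""
    return weights @ np.asarray(expert_decisions)
```

The point of observation (ii) above is that the experts' decisions for the current round are available before the meta-decision is formed, so `meta_decision` can be evaluated on up-to-date expert outputs.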
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive comments and appreciation of our work. Since both Reviewers tdoW and wJwu suggest conducting experiments to validate our theoretical results, we present the following general response regarding the empirical studies. --- **Setup.** In our paper, we show that OptCMD with suitable configurations is able to achieve a series of favorable theoretical guarantees in composite SEA. Moreover, for the practical scenarios where the prior knowledge of loss functions is unavailable, we propose a novel universal strategy, called USC-SEA, which can achieve the desired regret bounds for the three cases in composite SEA simultaneously. To verify our theoretical findings, we conduct experiments on the mushroom dataset from the LIBSVM repository [Chang and Lin, 2011], and consider the following online classification problem. At each round $t \in [T]$, the learner receives a data point $(\mathbf{x}_t, y_t) \in \mathbb{R}^d \times$ {$-1, 1$} with $d = 112$. Then, the learner plays the decision $\mathbf{w}_t$ from the ball $\mathcal{X}$ with the diameter $D=20$, and suffers a composite loss $$\phi_t(\mathbf{w}_t; \mathbf{x}_t, y_t) = f_t(\mathbf{w}_t; \mathbf{x}_t, y_t) + \lambda r(\mathbf{w}_t),$$ where we set the hyper-parameter $\lambda = 0.001$. The dataset used in the experiments is considered to be sampled from an unknown distribution, possessing an inherent stochastic property. To simulate the stochastically extended adversarial environments, we perturb the dataset by randomly flipping the labels of $10\%$ of the data as adversarial corruptions. Then, we sequentially pass each data point to the learner. We consider the following three types of loss functions.
- For the general convex case, we choose the smooth and convex cross-entropy function as the time-varying function: $$f_t(\mathbf{w}_t; \mathbf{x}_t, y_t) = -y_t \log \sigma (\mathbf{x}\_{t}^\top \mathbf{w}_t) - (1-y_t) \log (1-\sigma(\mathbf{x}\_{t}^\top \mathbf{w}_t)),$$ where $\sigma(\cdot)$ denotes the sigmoid function, and utilize the $\ell_1$-norm regularizer $r(\mathbf{w}_t) = ||\mathbf{w}_t||_1$. Therefore, the composite function takes the form of:$$ \phi(\mathbf{w}_t; \mathbf{x}_t, y_t) = -y_t \log \sigma(\mathbf{x}\_{t}^\top \mathbf{w}\_t) - (1-y_t) \log (1-\sigma(\mathbf{x}\_{t}^\top \mathbf{w}_t)) + \lambda ||\mathbf{w}_t||_1;$$ - For the strongly convex case, we employ the cross-entropy functions with the $\ell_2$-norm regularizer as the time-varying function: $$f_t(\mathbf{w}_t; \mathbf{x}_t, y_t) = -y_t \log \sigma (\mathbf{x}\_{t}^\top \mathbf{w}_t) - (1-y_t) \log (1-\sigma(\mathbf{x}\_{t}^\top \mathbf{w}_t))+\delta||\mathbf{w}_t||_2^2,$$which is $2\delta$-strongly convex, and still leverage the $\ell_1$-norm regularizer. Hence, the composite loss function is in the form of:$$ \phi(\mathbf{w}_t; \mathbf{x}_t, y_t) = -y_t \log \sigma(\mathbf{x}\_{t}^\top \mathbf{w}\_t) - (1-y_t) \log (1-\sigma(\mathbf{x}\_{t}^\top \mathbf{w}_t))+\delta||\mathbf{w}_t||_2^2+\lambda||\mathbf{w}_t||_1;$$ - For the exp-concave case, we utilize the logistic function as the time-varying function:$$f_t(\mathbf{w}_t; \mathbf{x}_t, y_t) = \log (1 + \exp (-y_t \mathbf{w}_t^\top \mathbf{x}_t)),$$which is exp-concave and smooth [Hazan et al., 2014], and still employ the $\ell_1$-norm regularizer. The composite loss function is shown below:$$\phi(\mathbf{w}_t; \mathbf{x}_t, y_t) = \log (1 + \exp (-y_t \mathbf{w}_t^\top \mathbf{x}_t)) + \lambda ||\mathbf{w}_t||_1.$$ --- **Contenders.** For the general convex and strongly convex cases, we compare our methods with OGD [Zinkevich, 2003], COMID [Duchi et al., 2010] and Optimistic-OMD [Chen et al., 2023]. 
For the exp-concave case, we choose ONS [Hazan et al., 2007], ProxONS [Yang et al., 2023] and Optimistic-OMD [Chen et al., 2023] as the contenders. All parameters of each method are set according to their theoretical suggestions. For instance, in the general convex case, the learning rate is set as $\eta = ct^{-1/2}$ in OGD, $\eta = cT^{-1/2}$ in COMID, and $\eta_t = D (c + \bar{V}_{t-1})^{-1/2}$ in Optimistic-OMD, where $c$ denotes a hyper-parameter selected from {$10^{-3},10^{-2},\cdots,10^{4}$}. --- **Results.** All experiments are repeated ten times, and we report the instantaneous loss, the cumulative loss and the average loss against the number of rounds in Figure 1 for the general convex case, Figure 2 for the strongly convex case and Figure 3 for the exp-concave case. From the experimental results, it is evident that adversarial corruptions cause considerable fluctuations in the instantaneous loss across different methods. Moreover, we observe that in the three cases of composite SEA, OptCMD incurs lower losses compared to baseline methods, and our universal method, USC-SEA, achieves similar performance to OptCMD without prior knowledge of the loss functions. This phenomenon can be attributed to their ability to adapt to the composite SEA environment, and their explicit support for handling the non-smooth component $r(\cdot)$. **Reference.** [1] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM TIST, 2011. [2] E. Hazan et al. Logistic regression: Tight bounds for stochastic and online optimization. COLT, 2014. [3] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. ICML, 2003. [4] E. Hazan et al. Logarithmic regret algorithms for online convex optimization. FTML, 2007. Pdf: /pdf/81ca5e08371c3d3c910e139a2d28227ef1ed4c33.pdf
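For concreteness, the exp-concave composite loss and the label-flipping corruption from the setup above can be sketched as follows. This is only an illustration of the experimental protocol, not the authors' code, and the function and parameter names are ours.

```python
import numpy as np

def composite_loss(w, x, y, lam=0.001):
    """Exp-concave case: logistic loss plus the l1 regularizer,
    phi_t(w) = log(1 + exp(-y * w^T x)) + lam * ||w||_1, with y in {-1, +1}."""
    return np.log1p(np.exp(-y * (w @ x))) + lam * np.abs(w).sum()

def corrupt_labels(y, frac=0.10, seed=0):
    """Randomly flip a fraction of the labels, injecting the adversarial
    component of the stochastically extended adversarial (SEA) stream."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
    y[idx] = -y[idx]
    return y
```

The corrupted stream is then fed to the learner one example per round, with the cumulative composite loss tracked over $T$ rounds.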
NeurIPS_2024_submissions_huggingface
2024
BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models
Accept (poster)
Summary: The paper proposes an algorithm to learn a (low-rank) variational posterior distribution over (a subset of) LLM weights during fine-tuning by combining Low-Rank Adaptation (LoRA) and Bayes by Backprop (BBB). To make the algorithm work in practice, several nontrivial modifications are introduced, such as certain parameterisations and optimisation strategies. Finally, the proposed method is empirically evaluated on a variety of in-distribution and out-of-distribution benchmarks and compared to relevant baselines. Strengths: - The paper is well written and the proposed algorithm is contextualised into existing work, with Figure 1 providing a great visual overview. - Concepts are introduced at an adequate pace and order, and a clean notation makes it easy for the reader to follow. - Elaborate empirical evaluation seems to suggest (somewhat) consistent gains over baseline approaches. - While the proposed algorithm seems like a straightforward combination of LoRA and BBB, the paper mentions that certain modifications are necessary for the algorithm to work in practice. These modifications are a core part of the paper's contribution, and they are discussed thoroughly and justified with a mix of theoretical and empirical arguments. Weaknesses: - The proposed method requires additional compute time and memory compared to standard LoRA. While this information is provided in Appendix C.2 (and seems to be small), I encourage the authors to be transparent about this by briefly mentioning it in the main paper (if that is not already the case, perhaps I missed it). Technical Quality: 4 Clarity: 4 Questions for Authors: - Appendix B.1 mentions weights for the KL divergence term. How important is this? Please correct me if I am wrong, but I believe this was not explicitly discussed in the main paper.
If this is important for the overall performance of the algorithm (and typically these kinds of adjustments tend to be...), I suggest at least mentioning it in the main paper (even if details remain in the appendix). - Is there any particular reason why $\mathbf{A}$ is modelled in a probabilistic way instead of $\mathbf{B}$? In other words, would it also be possible to do the reverse, i.e. assume a prior over $\mathbf{B}$ and treat $\mathbf{A}$ deterministically? The choice seems arbitrary to me, but perhaps there is an underlying reason for it. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Theoretical assumptions are stated as part of the provided theorems. As far as I am aware, general limitations of the proposed method (beyond the ones which are being addressed by the proposed method itself) are not explicitly discussed and could at least be briefly mentioned in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive and encouraging comments as well as the insightful questions. We are glad that you find our method ``"justified"`` by ``"theoretical and empirical arguments"``, our paper ``"well written"``/``"easy for the reader to follow"``, and the empirical evaluation showing our method's ``"consistent gains over baseline approaches"``. Additionally, we appreciate your recognition that our proposed necessary modifications are ``"a core part of the paper's contribution"``. Below we will address your questions in detail. **W1. The proposed method requires additional compute time and memory... by briefly mentioning it in the main paper...** This is a good suggestion. Following your and Reviewer Adn3's suggestion, we conducted an additional study on the post-training computational cost for different methods. We calculate the inference time and maximum memory usage for LAP, standard LoRA, and BLoB. The results are presented in the table below:

| Metric | Method | WG-S | ARC-C | ARC-E | WG-M | OBQA | BoolQ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **Inference Time (Seconds)** | **Standard LoRA** | 17 | 5 | 8 | 17 | 7 | 58 |
| | **LAP** | 311 | 445 | 814 | 554 | 1165 | 2508 |
| | **BLoB (N=10)** | 193 | 45 | 86 | 193 | 75 | 627 |
| **Max Memory (MB)** | **Standard LoRA** | 14391 | 14157 | 13081 | 14391 | 14391 | 14160 |
| | **LAP** | 43742 | 61881 | 64737 | 43678 | 55642 | 67364 |
| | **BLoB (N=10)** | 14171 | 14411 | 13911 | 14407 | 14478 | 14177 |
| **ECE** | **Standard LoRA (MLE)** | 29.83 (0.58) | 29.00 (1.97) | 13.12 (1.39) | 20.62 (0.74) | 12.55 (0.46) | 3.18 (0.09) |
| | **LAP** | **4.15 (1.12)** | 16.25 (2.61) | 33.29 (0.57) | 7.40 (0.27) | 8.70 (1.77) | **1.30 (0.33)** |
| | **BLoB (N=10)** | 9.35 (1.37) | **9.59 (1.88)** | **3.64 (0.53)** | **3.01 (0.12)** | **3.77 (1.47)** | 1.41 (0.19) |
| **NLL** | **Standard LoRA (MLE)** | 3.17 (0.37) | 2.85 (0.27) | 1.17 (0.13) | 0.95 (0.07) | 0.73 (0.03) | 0.32 (0.00) |
| | **LAP** | **0.63 (0.00)** | 1.03 (0.04) | 1.38 (0.01) | 0.57 (0.01) | 1.00 (0.00) | 0.45 (0.00) |
| | **BLoB (N=10)** | **0.63 (0.01)** | **0.78 (0.02)** | **0.40 (0.01)** | **0.54 (0.00)** | **0.50 (0.01)** | **0.31 (0.00)** |

These results show that compared to LAP, our BLoB can achieve comparable or better ECE and NLL with **less inference time** and **less memory usage**. Notably, our BLoB's memory overhead compared to standard LoRA is minimal, while LAP introduces significant memory overhead. We will include the discussion above in the main body of the revised paper as suggested. **Q1. ...the KL divergence term. How important is this? ... this was not explicitly discussed in the main paper...** This is a good question. Indeed, the KL divergence reweighting is crucial to BLoB's final performance. Due to space limits, we initially relegated this discussion to the Appendix. In our revision, we will present a comprehensive ablation study covering our novel KL reweighting schedule, parameterization, and asymmetric Bayesianization of LoRA, as suggested. **Q2. Is there any particular reason why $A$ is modelled in a probabilistic way instead of $B$?...** Thank you for this insightful question. As discussed in Section 3.1 (lines 129-131), BLoB models $A$ probabilistically rather than $B$ due to the conventional LoRA initialization: $A\leftarrow \text{random init}$, $B\leftarrow 0$. The choice is indeed arbitrary; this asymmetry could be reversed (i.e., model $B$ probabilistically instead) if the LoRA initialization were flipped (i.e., $A\leftarrow 0$, $B\leftarrow \text{random init}$). **Q3. Limitations** Please refer to **Q1. [Discussion of Limitations]** in **General Response**. --- Rebuttal Comment 1.1: Comment: Thank you for your response and for providing concrete numbers for inference time and memory consumption.
I encourage you to include those in your manuscript. I am also glad to hear that you will present an ablation study about your KL reweighing schedule, as this is "crucial to BLoB's final performance". I am maintaining my score as I believe it is appropriate. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you very much for the encouraging comments and for acknowledging our contribution. We are glad that our response has been helpful and convincing. We will include the discussion and ablation study in the revision based on your suggestion.
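The asymmetric Bayesianization discussed in Q2 can be sketched in a few lines of numpy: $B$ stays deterministic (zero at initialization, as in standard LoRA), while the low-rank factor $A$ gets a mean-field Gaussian posterior sampled via the reparameterization trick. The function and parameter names are illustrative, not BLoB's actual implementation.

```python
import numpy as np

def variational_lora_forward(x, W0, B, A_mu, A_rho, seed=None):
    """One stochastic forward pass with W = W0 + B @ A, where
    A ~ N(A_mu, softplus(A_rho)^2) is drawn by reparameterization
    and B is a deterministic parameter (zero-initialized in LoRA)."""
    rng = np.random.default_rng(seed)
    std = np.log1p(np.exp(A_rho))                      # softplus keeps std > 0
    A = A_mu + std * rng.standard_normal(A_mu.shape)   # one posterior sample of A
    return x @ (W0 + B @ A).T
```

At prediction time, averaging the outputs of N such stochastic passes (e.g., N=10, as in the table above) gives the Monte Carlo posterior predictive; with B still zero, the pass reduces exactly to the frozen base layer, whatever A is sampled.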
Summary: This paper proposes a method called BLoB, which uses mean-field variational inference on LoRA parameters to obtain Bayesian estimation on LLMs. This results in a richer posterior structure on the weights than diagonal, while maintaining the same computational cost as a diagonal covariance. During training, a new parameterization is used for the posterior covariance to accelerate convergence, and Flipout is used to make training more stable. Empirical results show BLoB obtains competitive performance compared with other Bayesian methods. Strengths: - By putting a diagonal prior and approximate posterior on LoRA parameters, it leads to a richer posterior covariance structure while maintaining the computational cost of diagonal covariance. - The new parameterization of the standard deviation seems promising for all VI training. - Good performance compared with other Bayesian methods. Weaknesses: - Not clear about the method's limitations. Since it uses VI, common disadvantages of VI also apply: (1) it needs to do sampling when making predictions, which can be computationally expensive given the size of LLMs. (2) Given the complex posterior structure, there will be a very high chance of getting noisy samples, which leads to suboptimal performance. This is already demonstrated in Table 1, where using MC estimation decreases the ACC compared with only using the mean. In addition, it seems that in practice much extra effort is needed to get the method to work, e.g., the complex KL re-weighting and the rescaled training-data size mentioned in Appendix B.1. - The whole paper feels a bit oversold: (1) I don't see the need to make Theorem 3.1 and Theorem 3.2 theorems, nothing surprising is stated there. (2) In Line 318, the authors claim their method "demonstrates superior generalization and uncertainty estimation capabilities"; I don't think the results justify the "superior" claim.
(3) In essence, BLoB is mean-field VI on LoRA parameters trained with better strategies to make it suffer less from VI's high variance. I have no problem with explaining methods in great detail, but the whole writing on page 4 and page 5 feels like the authors are trying to make it sound as fancy as possible. I like the idea of the paper, but I think it can be presented in a more honest way. Technical Quality: 2 Clarity: 2 Questions for Authors: Major: - I'm not sure what it means for the mean matrix of q(A) to be an output of a neural network (Line 186-187). Does this mean you are using another neural network to fit the mean of the posterior? - In Eq. (70), it seems different mini-batches have different KL re-weighting hyperparameters lambda? How sensitive is the training to the value of lambda? - What is the difference between BBB and BLoB? From my understanding, the only difference is how you do the training. For BBB you use the original parameterization and for BLoB you use the new parameterization plus Flipout? Also, how many samples did you use to evaluate BBB in Table 1? - I would be interested to see a figure plot of model performance versus the number of MC samples. Specifically, the y-axis is accuracy/ECE/NLL, the x-axis is the number of samples used during inference. This helps the reader to understand the sufficient sampling size for inference. - I would be interested to see Laplace added to Table 8 to show a fair comparison of the computation cost compared with BLoB. - Why is Laplace performing so poorly on the ARC-C dataset? In the original Bayesian LoRA paper it seems to perform much better on that dataset. Minor: - Line 236, MLE and MAP are not uncertainty estimation methods. - I don't entirely agree with the claim made in Lines 46-47. The possible suboptimal estimation of Laplace approximation comes from (1) LA, which assumes we reach the MAP estimate, while in practice you never know if the complex NN has really converged to the MAP; (2) setting the prior precision. 
It doesn't come from the post-training procedures. - the notation "n" refers to two different things, at Eq. (4) it's the number of parameters, and at Line 153 it's the dimension of the weight matrix. - Bolding is missing on ACC in Table 7. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: See weaknesses and questions above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and insightful questions. We are glad that you like our idea and find our method ``"promising"`` and its performance ``"good"``/``"competitive"``. Below we address your major questions (W2, W3, Q3, and Q5) in detail. The remaining questions will be answered in a separate **Official Comment** after this **Rebuttal** post. **W2. It seems in practice much extra effort is needed to get the method to work, e.g., the complex KL re-weighting and rescaled size of training data mentioned in Appendix B.1.** Thanks for mentioning this. Our KL re-weighting technique ensures the model fits the data well while converging to the prior distribution, akin to the BBB approach [1]. By rescaling the training data size, we maintain consistent hyperparameters across varying dataset sizes. As Reviewer AGd6 noted, these adjustments are not limitations but key technical contributions (``"a core part of the paper's contribution"``) that enable BLoB to work effectively across diverse datasets. [1] Blundell, Charles, et al. "Weight uncertainty in neural network." International conference on machine learning. PMLR, 2015. **W3.1. I don't see the need to make Theorem 3.1 and Theorem 3.2 theorems, nothing surprising is stated there... I like the idea of the paper, but I think it can be presented in a more honest way.** This is a good question. Actually our Theorems 3.1 & 3.2 are necessary because they

+ show that with a proper $\tilde{R}$, one can compute the KL divergence for the high-dimensional full weight vec($W$) simply by computing the KL divergence for $A$, which is much lower-dimensional, more parameter-efficient, more memory-efficient, and faster, and
+ investigate important theoretical properties of our BLoB, offering insights into its underlying assumptions and advantages.

We will include the clarification above in our revision as suggested. **W3.2. 
...I don't think the results justify the "superior" claim.** For the performance of BLoB (N=10), in the ID experiments in Table 1, it achieves the best ECE and NLL performance on almost all datasets, with similar ACC compared to MLE and MAP. In the OOD experiments in Table 2, BLoB ($N=10$) achieves the best performance in 7 out of 12 metrics across the four datasets, while the second-best LAP method achieves the best performance in only 4 metrics. The results clearly demonstrate that BLoB outperforms other baseline methods. **Q3. What is the difference between BBB and BLoB?... How many samples are used to evaluate BBB** The key distinctions between BLoB and BBB are:

1. **Asymmetric Bayesianization (AB):** BLoB models the approximate variational distribution for only one LoRA component, $A$. This technique is crucial, as classic BBB consistently fails on various datasets without it. In fact, to produce meaningful results for BBB in Tables 1 and 2, BBB has to incorporate our proposed AB, giving the baseline BBB an unfair advantage.
2. **Novel Parameterization and Training:** BLoB introduces a new method to parameterize the standard deviation of matrix $A$ and employs a different KL-reweighting strategy during training. Table 1 demonstrates that these innovations are essential for successful uncertainty estimation.

As mentioned by Reviewer AGd6, these distinctions ``"are necessary for the algorithm to work in practice"`` and ``"are a core part of the paper's contribution, and they are discussed thoroughly and justified with a mix of theoretical and empirical arguments."`` **Number of Samples.** For a fair comparison, both BLoB and BBB use $N=10$ samples in Table 1. We will clarify this in the revision. **Q5. I would be interested to see Laplace added to Table 8 to show a fair comparison of the computation cost compared with BLoB.** This is a good question. We divide the computational cost into two phases: training and post-training (inference on test data). 
LAP's training cost is equivalent to LoRA's, as shown in Table 8, since it only involves MAP training. For a comprehensive comparison of post-training computational costs, we calculate the inference time and maximum memory usage for LAP, standard LoRA, and BLoB. The results are presented in the table below. The complete table, including the corresponding ECE and NLL for reference, is provided as **Table 1** in the PDF attached to the **General Response**.

| Metric | Method | WG-S | ARC-C | ARC-E | WG-M | OBQA | BoolQ |
| --------------- | ------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- |
| **Inference Time (Seconds)** | **Standard LoRA** | 17 | 5 | 8 | 17 | 7 | 58 |
| | **LAP** | 311 | 445 | 814 | 554 | 1165 | 2508 |
| | **BLoB (N=10)** | 193 | 45 | 86 | 193 | 75 | 627 |
| **Max Memory (MB)** | **Standard LoRA** | 14391 | 14157 | 13081 | 14391 | 14391 | 14160 |
| | **LAP** | 43742 | 61881 | 64737 | 43678 | 55642 | 67364 |
| | **BLoB (N=10)** | 14171 | 14411 | 13911 | 14407 | 14478 | 14177 |

These results show that, compared to LAP, our BLoB can achieve comparable or better performance with **less inference time** and **less memory usage**. Notably, our BLoB's memory overhead compared to standard LoRA is minimal, while LAP introduces significant memory overhead. **The remaining questions are answered in a separate Official Comment titled "Remaining Questions for Reviewer Adn3" below.** --- Rebuttal 2: Title: Remaining Questions for Reviewer Adn3 Comment: **W1. Not clear about the method's limitations.** Please refer to **Q1. [Discussion of Limitations]** in **General Response**. **Q1. What does it mean for the mean matrix of q(A) to be an output of a neural network (Line 186-187)?** We are sorry for the typo. $q(A)$ is not the output of a neural network. It is directly modeled as a Gaussian distribution $\mathcal{N}(M, \Omega)$, with $M$ and $\Omega$ as learnable parameters. **Q2. In Eq. 
(70), it seems different mini-batches have different KL re-weighting hyperparameters lambda? How sensitive is the training to the value of lambda?** Yes, different mini-batches have different $\lambda$ values. When we use the typical settings of $\lambda_i = \frac{2^{M-i}}{2^M-1}$ [1] or $\lambda_i = \frac{1}{M}$ [2], the model struggles to fit the data distribution while converging to the prior distribution, resulting in a "NaN" NLL loss. Our setting of $\lambda_i = \frac{2^i}{2^M-1}$ allows the model to fit the data in the early stages of training and to achieve a reasonable level of complexity cost in the later stages. This re-balancing scheme of data likelihood and complexity cost has proven effective across multiple datasets and can potentially be applied to other VI methods. We will include the discussion above in the revision as suggested. [1] Blundell, Charles, et al. "Weight uncertainty in neural network." International conference on machine learning. PMLR 2015. [2] Graves, Alex. "Practical variational inference for neural networks." NeurIPS 2011. **Q4. I would be interested to see a figure plot of model performance versus the number of MC samples. ...** This is a good suggestion. Following your suggestion, we have run additional experiments for model performance versus the number of MC samples on the WG-S dataset. The results are presented in the table below. The corresponding figure is provided as **Figure 1** in the PDF attached to the **General Response**. 
| | **N=0** | **N=1** | **N=2** | **N=3** | **N=4** | **N=5** | **N=10** | **N=20** | **N=40** | **N=80** | **N=160** |
| ---------------------- | ------- | ------- | ------- | ------- | ------- | ------- | -------- | -------- | -------- | -------- | ------ |
| **ACC ($\uparrow$)** | 71.44 | 65.20 | 66.28 | 66.91 | 67.63 | 68.20 | 68.31 | 68.04 | 68.15 | 68.31 | 68.15 |
| **ECE ($\downarrow$)** | 19.71 | 19.72 | 14.60 | 12.82 | 11.27 | 10.58 | 9.51 | 9.47 | 9.25 | 8.75 | 8.75 |
| **NLL ($\downarrow$)** | 0.84 | 0.8617 | 0.7355 | 0.6971 | 0.6766 | 0.6620 | 0.6395 | 0.6313 | 0.6233 | 0.6213 | 0.6215 |

We can see that, in general, the performance improves with more MC samples but saturates after around $N=10$. **Q6. Why is Laplace performing so poorly on the ARC-C dataset? In the original Bayesian LoRA paper it seems to perform much better on that dataset.** Please refer to **Q2. [Reproducing LAP's Results on the ARC-C Dataset]** in **General Response**. **Q7. Line 236, MLE and MAP are not uncertainty estimation methods.** We apologize for the confusion. We will remove "uncertainty estimation" when describing MLE and MAP in the revision as suggested. **Q8. I don't entirely agree with the claim made in Lines 46-47. The possible suboptimal estimation of Laplace... never know if the complex NN has really converged to MAP; (2) setting prior precision...** This is a good point. We agree that these are also major reasons for the Laplace approximation's poor performance. We will include these two additional reasons in our revision as suggested. **Q9. Notation issue and missing bolding** We are sorry for these typos and will correct them in the revision as suggested. --- Rebuttal Comment 2.1: Comment: Thank you for the rebuttal; it addresses most of my concerns. Still, I am not fully convinced that a Theorem is needed to state the results in Theorems 3.1 and 3.2; maybe this is because I come from a more Bayesian background and thus the results do not look surprising. 
On a side note, I recently came across an ICML 2024 paper "Variational Learning is Effective for Large Deep Networks". In essence, they do natural gradient descent in natural parameter space for mean-field VI, and they make it work for large-scale neural networks including GPT-2. This is highly relevant to the paper, so I strongly suggest the authors discuss it in the paper. --- Reply to Comment 2.1.1: Title: Thank You for Your Further Feedback Comment: Thank you for your continued feedback and for keeping the communication channel open. We are glad that you found our response helpful and that it addressed most of your concerns. Below we address your remaining comments in detail. **W3.1 [the Term "Theorem"]** We appreciate your insightful feedback. In light of your suggestion, we will revise Theorem 3.1 and Theorem 3.2 to Proposition 3.1 and Proposition 3.2, respectively. We believe this change can more accurately reflect the nature of our contributions. Nonetheless, we maintain that the discussion presented here offers valuable insights to the broader community. The theoretical foundation of Asymmetric Bayesianization laid out in this work can serve as a springboard for the development of new methods in the field. In addition, inspired by your comments, we plan to reorganize Section 3.1 to enhance clarity. We will start with the calculation of the full weight matrix $W\_{ij}=W\_{0,ij} + \sum\_{k=1}^r B\_{ik}A\_{kj}$ (i.e., Eqn. 5 of the paper) and subsequently introduce the advantages of Asymmetric Bayesianization over Bayesianizing both $A$ and $B$. **[Related Work]** Thank you for bringing the ICML 2024 paper "Variational Learning is Effective for Large Deep Networks" to our attention. It was not available at the time of our submission. We agree that it is highly relevant to our work, and we will ensure it is cited and discussed in the revised version. 
Finally, we would like to express our sincere gratitude once again for your insightful and constructive comments, which have greatly helped improve our paper.
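Supplementary note: the three KL re-weighting schedules discussed in Q2 above can be compared with a short numerical sketch. This is purely illustrative (the minibatch count $M$ is hypothetical), not code from the paper:

```python
import numpy as np

M = 8                       # hypothetical number of minibatches per epoch
i = np.arange(1, M + 1)     # minibatch index i = 1..M

blundell = 2.0**(M - i) / (2.0**M - 1)  # Blundell et al.: decays, heavy KL weight early
uniform  = np.full(M, 1.0 / M)          # Graves: constant weight 1/M on every minibatch
blob     = 2.0**i / (2.0**M - 1)        # BLoB's schedule: grows, so the model fits the
                                        # data first and pays the complexity cost later

print(blundell.round(4))
print(blob.round(4))
```

Blundell's weights sum to one and concentrate the KL penalty on the first minibatches of each epoch; BLoB's schedule reverses that ordering, matching the rebuttal's description of fitting the data in the early stages.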
Summary: Proposes a Bayesian version of LoRA by placing priors over the low-rank matrices; this evolves into placing a prior over only one of the low-rank matrices. Inference is via variational inference. Results are mixed, though they show the method is competitive with others. Strengths: - The eventual simplicity of the approach by placing a prior over only one of the low-rank matrices. - Use of independent Gaussians in the prior and posterior for efficiency. - The method is competitive, based on the results. Weaknesses: The flow of the paper is rather poor, and the presentation of the method is overly complicated. I prefer the authors to simply start from (5) and then say the deltas to the pre-trained weights are Gaussians because they are sums of Gaussians. The consideration to also put a prior on $B$ can be deferred to the discussion, and the trade-offs discussed there. This makes the model presentation cleaner, and avoids the problem of having to involve improper priors in Theorem 3.1 (since B is low rank, $\Sigma_{q}$ is low rank, the Gaussian is degenerate) and Theorem 3.2, and the subsequent problem of the reader having to think about how the KL with degenerate Gaussians would work out in theory. In short, the theorems are unnecessary and need fixing because of improper distributions. 1. Line 137: What is it that requires accurate estimation? 2. Section 4: How is more than one sample (for N>0) used in generating a single output for evaluation and then influencing the evaluation metric? *Minor* - Eq (3) to remove the $\min_{\theta}$ prefix, and to introduce the prior and likelihood. - Section 4: Shouldn't ECE and NLL be the metrics in focus because this is about "Bayesian"? - Section 4: The numbers in the table do not justify saying BLoB is the best. - "Bayesianization" is a mouthful --- can we think of a better derived word? 
Technical Quality: 4 Clarity: 1 Questions for Authors: Q1 and Q2 above (in weaknesses) Confidence: 5 Soundness: 4 Presentation: 1 Contribution: 3 Limitations: No. Paper need to include some discussion on the limitations of their approach/paper. I do not see any potential negative societal impact to be addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and insightful questions. We are glad that you find our proposed methodology ``"competitive"`` with ``"simplicity"``, ``"efficiency"``, and ``"good contribution"``. Below we will address your questions in detail. **W1. The flow of the paper is rather poor, and the presentation of the method overly complicated. I prefer the authors to simply start from (5) and...** We sincerely appreciate your valuable feedback on the organization of our paper. We will make every effort to incorporate your suggestions to ensure that our model presentation is clearer and more understandable. **W2. ...the problem of having to involve improper priors... In short, the theorems are unnecessary and need fixing because of improper distributions.** This is a good question. **Degenerate Gaussians.** Note that in our context, **degenerate Gaussian distributions are still valid for probabilistic inference**, as demonstrated by Schoeman et al. [1] (Eq. 12, 16-18), because

+ their probability density is well-defined, and
+ the KL divergence between two degenerate Gaussians is computable.

Due to space constraints, we provide a detailed discussion in a separate **Official Comment** titled **Degenerate Gaussians are Proper for Probabilistic Inference** below this **Rebuttal**. **Necessity of Theorems 3.1 & 3.2.** Our Theorems 3.1 & 3.2 are necessary because they

+ show that with a proper $\tilde{R}$, one can compute the KL divergence for the high-dimensional full weight vec($W$) simply by computing the KL divergence for $A$, which is much lower-dimensional, more parameter-efficient, more memory-efficient, and faster, and
+ investigate important theoretical properties of our BLoB, offering insights into its underlying assumptions and advantages.

We will include the clarification above in our revision as suggested. [1] Schoeman, Johannes Cornelius, Corné E. van Daalen, and Johan A. du Preez. 
"Degenerate Gaussian factors for probabilistic inference." IJAR 2022. **W3. Eq (3) to remove the $\min_\theta$ prefix, and to introduce the prior and likelihood.** This is a good suggestion. We will remove the prefix in the revision. **W4. Section 4: Shouldn't ECE and NLL be the metrics in focus because this is about "Bayesian"?** Yes, ECE and NLL are indeed the primary metrics for evaluating LLMs' uncertainty estimation. However, we must also consider the model's accuracy to ensure a balance between uncertainty estimation and predictive performance. For example, a random classifier could achieve perfect calibration (zero/optimal ECE) and potentially lower NLL than a poorly calibrated model. Thus, while prioritizing NLL and ECE, it is also crucial to maintain ACC comparable to MLE and MAP baselines. **W5. Section 4: The numbers in the table does not justify saying BLoB is the best.** For the performance of BLoB ($N=10$), in the ID experiments in Table 1, it achieves the best ECE and NLL performance on almost all datasets, with similar ACC compared to MLE and MAP. In the OOD experiments in Table 2, BLoB ($N=10$) achieves the best performance in 7 out of 12 metrics across the four datasets, while the second-best LAP method achieves the best performance in only 4 metrics. The results clearly demonstrate that BLoB outperforms other baseline methods. **W6. "Bayesianization" is a mouthful --- can we think of a better derived word?** Thanks for mentioning this. We follow prior work to use "Bayesianization" as the noun form of "Bayesianize" [1]. This term accurately captures the process of transforming a deterministic component into a Bayesian one. [1] Bacharach, Michael, and Susan Hurley. "Issues and advances in the foundations of decision theory." Foundations of decision theory (1991): 1-38. **Q1. Line 137: What is it that requires accurate estimation?** The term "accurate estimation" refers to estimating $E_{A, B}[BAx]$. 
When both $A$ and $B$ are modeled as Gaussian posteriors, $E_{A, B}[BAx]=0$ holds before fine-tuning. However, accurately estimating this expectation requires an impractically large number of weight samples. In our BLoB approach, we model $B$ as deterministic and initialize it to 0 (similar to LoRA), avoiding this issue and leading to faster convergence with fewer samples. We will include the discussion above in the revision. **Q2. How is more than one sample (for N>0) used in generating a single output?** This is a good question. Ideally, a Bayesian neural network's output is the expected output $E_{q(W|\theta)}[P(Y|W,X)]$, where $X$ is the input, $Y$ is the output, and $q(W|\theta)$ is the approximate posterior of the parameters. In practice, we approximate this expectation through sampling: $E_{q(W|\theta)}[P(Y|W,X)]\approx \frac{1}{N}\sum_{n=1}^{N} P(Y|W_n, X)$, where $W_n\sim q(W|\theta)$ is the n-th sample drawn from the approximate posterior, and $N$ is the total number of samples. **Q3. Limitations** Please refer to **Q1. [Discussion of Limitations]** in **General Response**. --- Rebuttal 2: Title: Degenerate Gaussians are Proper for Probabilistic Inference Comment: In this separate Official Comment, we will elaborate on two key reasons why degenerate Gaussian distributions are valid for probabilistic inference: 1. **Their probability density is well-defined.** According to Schoeman et al. [1], the *probability density* of a degenerate Gaussian distribution can be factorized into the product of the density of a non-degenerate Gaussian distribution in a lower-dimensional linear subspace and a Dirac delta function (Eq. 12 in [1]). This density function has an alternative expression in the form of a limit (Eq. 16-18 in [1]), which aligns with our main Theorems 3.1 and 3.2: $$ p(x) = \lim_{a\rightarrow 0}\mathcal{N}(x | \mu, Q(\Lambda^{-1} - aI)Q^T + a I) = \mathcal{N}(x | \mu, \Sigma=\lim_{a\rightarrow 0} Q(\Lambda^{-1} - aI)Q^T + a I). 
$$ Here $\Lambda^{-1}$ is the precision matrix of the lower-dimensional non-degenerate Gaussian distribution, and $Q$ is the linear transformation matrix that expands this to high dimensions. In our main paper, we use the simplified notation $\mathcal{N}(x | \mu, \Sigma)$ for readability, omitting the limit. The full limit notation and proof appear in Appendix A.1. We will clarify this simplification in our revision. 2. **The KL divergence between two degenerate Gaussians is computable under certain conditions.** Secondly, the *KL divergence* between two degenerate Gaussians can be computed under the condition that they "have support on the same lower-dimensional manifold" (Eq. 44 in Schoeman et al. [1]). This condition has also been clearly stated as satisfied in our Theorem 3.2 ($RR^{T} = BB^{T}$, line 177), and we also include a detailed derivation of how we reach this condition in Appendix A.1 (line 676-678). [1] Schoeman, Johannes Cornelius, Corné E. van Daalen, and Johan A. du Preez. "Degenerate Gaussian factors for probabilistic inference." IJAR 2022. --- Rebuttal 3: Comment: W1 + W2. I understand that the method is sound because your degenerate Gaussians are on the same subspace. It is now clear to me that you also understand this constraint. I'll increase the soundness score, and the overall score. However, I still believe that the presentation can be vastly simplified (and hence the paper more accessible) by simply starting from (5). The relation to full Bayesianization can be deferred to discussion. W5.  It is "generally better" than the other methods. You can also say it is "the best on average" if you do some averaging either over the score or by counting wins. You cannot just say it is "the best". Q2. I see that the "experimental setup strictly adheres to the original LoRA [2] framework". It is good to say this explicitly in the paper. For ACC, do you generate, say N=10, and then count if *any* is correct? 
Is there some discount because you have N chances? --- Rebuttal Comment 3.1: Title: Thank You for Your Further Feedback Comment: Thank you for your further feedback and for keeping the communication channel open. We are glad that you found our response helpful and that it addressed your concerns. Below we address your remaining comments in detail. **W1+W2 [Presentation]**: Thank you again for your suggestion regarding the presentation. We will re-organize the presentation to start with Eqn. 5 in the revision as you suggested. **W5 ["Generally Better"]**: Thank you for your suggestion. We will adjust the claim from "the best" to "generally better" to more accurately reflect our contribution. **Q2 [ACC and N=10 Samples]**: Thank you for your recognition and insightful question. For ACC, we first draw N=10 samples, i.e., N=10 output probabilities, from our BLoB, and then compute the average of these N samples as *one* single final prediction. This approach aligns with the method we previously described: "approximate this expectation through sampling: $E_{q(W|\theta)}[P(Y|W,X)] \approx \frac{1}{N} \sum_{n=1}^{N} P(Y|W_n, X)$." Based on this averaged output probability (i.e., the approximation of the expectation), we then calculate the ACC. The ACC is *not* calculated based on whether *any* individual sample is correct. Instead, N=10 samples are aggregated to make *only one prediction*. Therefore, there is *no discount* needed. We will follow your suggestion to provide further clarification on the output of variational inference and clarify that the "experimental setup strictly adheres to the original LoRA [2] framework" in our revised version. Last but not least, we would like to thank you again for your insightful and constructive comments. They have greatly helped improve our paper. --- Rebuttal 4: Title: Improved Outline of the Method Comment: Thank you for your valuable suggestions on enhancing the presentation quality of our work. 
Following your suggestions, we have refined Sections 3.1 and 3.2. Below is the outline of our updated version:

**Section 3.1: Low-Rank Variational Approximate Posterior Distribution**

1. Main Method: Introduction of Asymmetric Bayesianization Scheme
   - Calculating each entry of the full weight matrix: $W\_{ij}=W\_{0,ij} + \sum\_{k=1}^r B\_{ik}A\_{kj}$ (i.e., Eqn. 5 of the paper)
   - $A$ modeled by independent Gaussians: $P(A|\theta)=\mathcal{N}(A|M,\Omega^2)$
   - $B$ modeled as deterministic values
2. Presentation of Theorem 3.1
   - Statement: Asymmetric Bayesianization corresponds to assuming a low-rank Gaussian variational distribution for the full weight $W$
3. Discussion on the Choice of Asymmetric Bayesianization
   - Advantages over Bayesianizing both $A$ and $B$:
     + Stable training at the early stage
     + Faster convergence of parameters during training
     + Lower memory cost

**Section 3.2: Low-Rank Prior Distribution**

1. Presentation of the Prior Distribution on the Low-Rank Component $A$
2. Corresponding Prior Distribution on the High-Dimensional Space of the Full Weight Matrix $W$
3. Presentation of Theorem 3.2
   - Statement: Full-weight KL divergence can be efficiently computed on the low-rank component
4. Dedicated Remark on the Legitimacy of Using Degenerate Gaussians for Probabilistic Inference (as in Our Previous Discussion)

Thank you once again for your suggestions. We believe this revision will make the paper more accessible for our audience and help address any potential areas of confusion.
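Supplementary note: one ingredient behind the Theorem 3.2 statement above (computing the full-weight KL divergence on the low-rank component) is the fact that the KL divergence between two Gaussians is unchanged when both are pushed through the same invertible linear map. A minimal numerical sanity check of that invariance, with illustrative dimensions and not taken from the paper:

```python
import numpy as np

def gauss_kl(m1, S1, m2, S2):
    """Closed-form KL( N(m1, S1) || N(m2, S2) ) for non-degenerate Gaussians."""
    k = len(m1)
    S2inv = np.linalg.inv(S2)
    d = m2 - m1
    _, logdet1 = np.linalg.slogdet(S1)
    _, logdet2 = np.linalg.slogdet(S2)
    return 0.5 * (np.trace(S2inv @ S1) + d @ S2inv @ d - k + logdet2 - logdet1)

rng = np.random.default_rng(0)
r = 3                                                  # illustrative low-rank dimension
m1, m2 = rng.normal(size=r), rng.normal(size=r)
A1, A2 = rng.normal(size=(r, r)), rng.normal(size=(r, r))
S1, S2 = A1 @ A1.T + np.eye(r), A2 @ A2.T + np.eye(r)  # two SPD covariances

# Push both Gaussians through the same invertible map T: N(m, S) -> N(Tm, T S T^T).
# The KL divergence between the two distributions is unchanged.
T = rng.normal(size=(r, r)) + 3 * np.eye(r)            # well-conditioned, invertible
kl_low  = gauss_kl(m1, S1, m2, S2)
kl_high = gauss_kl(T @ m1, T @ S1 @ T.T, T @ m2, T @ S2 @ T.T)
print(kl_low, kl_high)  # the two values agree up to floating-point error
```

The degenerate case in the rebuttal (both Gaussians supported on the same column space of $B$) extends this idea: the KL is then well-defined on the shared subspace and reduces to the KL of the low-rank coordinates.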
Summary: This work introduces a Bayesian Deep Learning framework for finetuning LLMs with probabilistic LoRA. Unlike existing work by Yang et al. [1], which uses a Laplace approximation, the variational distribution is parameterized using diagonal Gaussians as done in [2] ("Bayes by Backprop"). Specifically, the authors design the variational distribution as follows: Given the standard LoRA updates $W = W_0 + BA$, only $A$ is modeled probabilistically while $B$ is deterministic. In combination with an appropriate choice of prior, this allows for a low-rank parameterization of the variational posterior and fast KL-computation in the lower-dimensional parameter space. Experiments compare the method to existing baselines in terms of accuracy of the mean predictions, calibration (ECE) and NLL on a range of datasets. [1] https://arxiv.org/pdf/2308.13111 [2] https://proceedings.mlr.press/v37/blundell15.pdf ____ I have updated my overall score +1 after the rebuttal. Strengths: The problem of assigning principled uncertainties to (fine-tuned) LLM outputs is important, and it makes sense to compare different Bayesian Deep Learning strategies in combination with LoRA. The exposition of the method and paper in general is clear. Weaknesses: Novelty is limited compared to Yang et al. [1], which implements a very similar idea via Laplace approximations instead of the Bayes-by-Backprop variational posterior. Additionally, the method by Yang et al. ("LAP") is not replicated successfully in Table 1. For example, for the LAP method on the ARC-C dataset the results in this work fall behind the results in [1] by 50%: - This work: Accuracy: 29.73±12.02, ECE: 14.24±1.65 and NLL: 1.53±0.01 - [1]: Accuracy: 66.91±1.1, ECE: 7.5±1.2 and NLL: 0.86±0.02, respectively. This makes it difficult to gauge how much the approach improves compared to [1]. 
Technical Quality: 3 Clarity: 3 Questions for Authors: Questions: - Table 1: As $N \rightarrow \infty$, would we expect the accuracy to converge to those obtained with $N=0$ (mean prediction)? Is BLoB an unbiased estimator of the mean prediction? - l. 240: “For MCD, we use an ensemble of 10 LoRAs with a dropout rate of p = 0.1”. Why is an ensemble used here? Shouldn’t the uncertainties come from the dropout variational posterior? Why are multiple models necessary? - Will you open source the code on publication? Suggestions & Typos: - l. 33: I.e., unable to … (sentence is not grammatical, delete)
 - l. 110: yielded by parameterization —> yielded by reparameterization 
 - Algorithm 1, line 11: Updater —> Update
 - l. 237: Maximize A Posteriori —> Maximum A Posteriori 
 - l. 239: “For MAP, we use a weight decay rate of 1e-5.” I suggest to formulate this in terms of prior variance instead.
 - l. 638 covaraince —> covariance Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: No, I recommend a more explicit treatment of the limitations, perhaps in a dedicated section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and insightful questions. We are glad that you find the problem we address ``"important"``, our empirical evaluation of BDL on LoRA ``"makes sense"``, and our paper ``"clear"``. Below we will address your questions in detail. **W1. Novelty is limited compared to Yang et al. [1], which implements a very similar idea via Laplace approximations instead of the Bayes-by-Backprop variational posterior.** Please refer to **Q3. [Novelty of BLoB]** in **General Response**. **W2. The method by Yang et al. ("LAP") is not replicated successfully in Table 1.** Please refer to **Q2. [Reproducing LAP's Results on the ARC-C Dataset]** in **General Response**. **Q1. As $N\rightarrow \infty$, would we expect the accuracy to converge to those obtained with $N=0$ (mean prediction)? Is BLoB an unbiased estimator of the mean prediction?** This is a good question. Ideally, a Bayesian neural network's output is the expected output $E_{q(W|\theta)}[P(Y|W,X)]$, where $X$ is the input, $Y$ is the output, and $q(W|\theta)$ is the approximate posterior of the parameters. In practice, we approximate this expectation through sampling: $E_{q(W|\theta)}[P(Y|W,X)]\approx \frac{1}{N}\sum_{n=1}^{N} P(Y|W_n, X)$, where $W_n\sim q(W|\theta)$ is the n-th sample drawn from the approximate posterior, and $N$ is the total number of samples. When $N=0$, we use the mean of the weights to make predictions $P(Y|E_{q(W|\theta)}[W], X)$. Due to the *nonlinearity* of neural networks $P(Y|W,X)$, as $N \rightarrow \infty$, the average output converges to the expectation, but typically differs from the mean-weight prediction. 
Thus,
+ the sample mean BLoB uses, $E_{q(W|\theta)}[P(Y|W,X)]\approx \frac{1}{N}\sum_{n=1}^{N} P(Y|W_n, X)$, is an unbiased estimator of $E_{q(W|\theta)}[P(Y|W,X)]$, but
+ BLoB is not an unbiased estimator of the mean prediction $P(Y|E_{q(W|\theta)}[W], X)$, and the mean prediction is not an unbiased estimator for the Bayesian prediction $E_{q(W|\theta)}[P(Y|W,X)]$ either.

To summarize, when $N \rightarrow \infty$, $\frac{1}{N}\sum_{n=1}^{N} P(Y|W_n, X) \rightarrow E_{q(W|\theta)}[P(Y|W,X)] \neq P(Y|E_{q(W|\theta)}[W], X)$. To provide more context, we report the results for $N=0$ to $N=160$ on the WG-S dataset in the table below, demonstrating improved uncertainty estimation with an increased number of samples, but not converging to the accuracy of $N=0$.

| | **N=0** | **N=1** | **N=2** | **N=3** | **N=4** | **N=5** | **N=10** | **N=20** | **N=40** | **N=80** | **N=160** |
| ---------------------- | ------- | ------- | ------- | ------- | ------- | ------- | -------- | -------- | -------- | -------- | ------ |
| **ACC ($\uparrow$)** | 71.44 | 65.20 | 66.28 | 66.91 | 67.63 | 68.20 | 68.31 | 68.04 | 68.15 | 68.31 | 68.15 |
| **ECE ($\downarrow$)** | 19.71 | 19.72 | 14.60 | 12.82 | 11.27 | 10.58 | 9.51 | 9.47 | 9.25 | 8.75 | 8.75 |
| **NLL ($\downarrow$)** | 0.84 | 0.8617 | 0.7355 | 0.6971 | 0.6766 | 0.6620 | 0.6395 | 0.6313 | 0.6233 | 0.6213 | 0.6215 |

**Q2. Why is an ensemble used here? Shouldn’t the uncertainties come from the dropout variational posterior? Why are multiple models necessary?** We apologize for the confusion. By “ensemble of 10 LoRAs”, we meant to say “For MCD, we sample 10 times from the variational posterior distribution of LoRA with a dropout rate of $p = 0.1$ during inference." We will clarify this in the revision. **Q3. Will you open source the code on publication?** Thank you for your interest. The implementation of our method is straightforward and compatible with different LLMs.
In fact, we have cleaned up the code, and will release the code upon acceptance of the paper. **Q4. Suggestions & Typos** We sincerely appreciate the reviewer's careful reading and for pointing out the typos. We will fix them in the revision as suggested. **Q5. Limitations** Please refer to **Q1. [Discussion of Limitations]** in **General Response**. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thanks for the additional details, especially the derivation and experiment in Q1. I see that $$E_{q(W|\theta)}[P(Y | W, X)] \neq P(Y | E_{q(W|\theta)} [W], X).$$ Follow up: What's the intuition behind the mean prediction (RHS) achieving higher accuracy than the (approximate) Bayesian prediction (LHS), also for large $N$? Overall, the rebuttal partly addresses my concerns around novelty (Q2 and Q3 in "General response"), hence I increase my overall score by one point. --- Reply to Comment 1.1.1: Title: Thank You for Your Further Feedback Comment: Thank you for your further feedback and for maintaining an open line of communication. We are glad that you found our response helpful and that it addressed your concerns. Below we address your follow-up comment on Q1. **[Higher Accuracy of Mean Prediction Compared to Bayesian Prediction]** This is a good question. By abandoning the modeling of the posterior distribution, mean prediction sacrifices some degree of calibration in exchange for improved accuracy. This empirical trade-off between accuracy and calibration has been noted in [3]. We will include this discussion in our revision. Lastly, we would like to express our gratitude once again for your insightful and constructive comments. They have significantly contributed to the improvement of our paper. [3] Stengel-Eskin, Elias, and Benjamin Van Durme. "Calibrated interpretation: Confidence estimation in semantic parsing." Transactions of the Association for Computational Linguistics 2023
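The Jensen-gap point in Q1 above — that averaging predictions over posterior samples differs from predicting with the mean weights — can be checked numerically. The following sketch is not from the paper: it uses a toy 1-D logistic model with an assumed Gaussian "posterior" instead of an LLM, purely to contrast the two estimators.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 1-D "posterior" over a single weight: q(w) = N(mu, sd^2).
# mu, sd, and x are made-up illustrative values.
mu, sd, x = 2.0, 3.0, 1.0

# Mean-weight prediction P(Y | E[w], x) -- the N=0 case in the rebuttal.
mean_pred = sigmoid(mu * x)

# Monte Carlo Bayesian prediction (1/N) sum_n P(Y | w_n, x), w_n ~ q(w).
w = rng.normal(mu, sd, size=200_000)
bayes_pred = sigmoid(w * x).mean()

# Because sigmoid is nonlinear, the two generally differ (Jensen gap):
# the averaged prediction is pulled toward 0.5 relative to sigmoid(mu).
print(mean_pred, bayes_pred)
```

With a wide posterior (large `sd`), the gap is substantial, which mirrors why the $N \rightarrow \infty$ estimate in the table above does not converge to the $N=0$ accuracy.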
Rebuttal 1: Rebuttal: # General Response We thank all the reviewers for their valuable and constructive comments. We are glad that they like our idea (Adn3) and find the problem we address ``"important"`` (Lm4V), our paper ``"clear"``/``"well written"/"introduced in an adequate pace and order"`` (Lm4V, AGd6), our method ``"nontrivial"``/``"necessary"``/``"promising"`` (AGd6, Adn3) as well as ``"justified"`` by theoretical and empirical arguments (AGd6), and our experiments showing that our method is ``"competitive"``/``"promising"``, of ``"efficiency"``/``"good performance"`` with ``"consistent gains over baseline approaches"`` (Adn3, Zor8, jN5u). Due to space constraint (6000 characters), we cannot cover all questions, but we promise to address all questions and cite all related references in our revision. Below we address reviewers' common questions one by one. **Q1. [Discussion of Limitations] (Lm4V, Zor8, Adn3, AGd6)** This is a good suggestion. We will add a Limitations section in the revision, which is summarized below: + Currently, BLoB is not suitable for training-free tasks (direct operation on LLMs during inference), which are interesting future work. + As the common disadvantage of VI algorithms, BLoB also requires sampling $N$ times during inference. + With a limited number of samples, the presence of unavoidable noisy samples can affect the algorithm's performance. Sampling a computationally acceptable number of times (e.g., $N=10$) can mitigate this issue. **Q2. [Reproducing LAP's Results on the ARC-C Dataset] (Lm4V, Adn3)** Our experimental setup strictly adheres to the original LoRA [2] framework, which has the **same hyperparameters** for **all datasets and methods** (learning rate, fine-tuning iterations, etc.). This widely adopted approach in LoRA-based research enhances our work's relevance and potential impact. Yang et al. 
[1] (LAP) employ a different setup in their original paper, incorporating early stopping to ensure a reasonable MAP for subsequent LA. However, this often results in reduced accuracy (over 1% deficit), prompting our decision to diverge from their setting. Nonetheless, we have diligently reproduced Yang et al. [1] within our framework, utilizing exactly their released code, with exactly the same hyperparameter configurations. However, we were unable to find a set of hyperparameters that consistently works for LAP across all datasets, leading to its poor performance on the ARC-C dataset. To achieve competitive performance with LAP on the ARC-C dataset, we deviated from the unified setting (i.e., the same hyperparameter configuration for all datasets), allowing LAP to have different hyperparameter configurations for different datasets. Note that this creates an unfair advantage for LAP. Specifically, we conducted an exhaustive grid search on 3 hyperparameters, resulting in the optimal configuration below:
- Dropout rate: 0.1
- Learning rate: 5e-5
- Early stopping: At 5000th iteration (out of 10000)

The table below shows the corresponding results (along with the original BLoB results for reference):

| Metrics | LAP | BLoB ($N=10$) |
| ---------------------- | ------------ | ------------- |
| **ACC** | 66.78 (0.69) | **68.81 (1.09)** |
| **ECE** | 16.25 (2.61) | **9.59 (1.88)** |
| **NLL** | 1.03 (0.04) | **0.78 (0.02)** |

Note that BLoB (under the unified setting) still outperforms LAP even when LAP has the unfair advantage of allowing different hyperparameter configurations for different datasets. This showcases BLoB's robust performance improvement. During our reimplementation, we observed that LAP is highly dependent on and sensitive to MAP estimation. It frequently fails, corroborating your previous observation about LAP's potential disadvantage: sub-optimal MAP convergence significantly impacts LAP's performance.
We will include the discussion above in the revision as suggested. [1] Yang, Adam X., et al. "Bayesian Low-rank Adaptation for Large Language Models." ICLR 2024. **Q3. [Novelty of BLoB] (Lm4V)** While using Bayesian methods to address inaccurate uncertainty estimation in neural networks is not new, it is a **nontrivial** contribution to apply these techniques effectively to Large Language Models (LLMs) and demonstrate their practicality. Laplace Approximation (**LA**) and Variational Inference (**VI**) are **two major approximate Bayesian inference paradigms**. While Yang et al. [1] validated **LA**'s effectiveness for LLMs, **VI** remains unexplored in this context. Our BLoB, as **the first representative VI approach on LLMs**, addresses this major research gap by demonstrating performance comparable or superior to LAP. As Reviewer AGd6 noted, while ``"the proposed algorithm seems like a straightforward combination of LoRA and BBB, the paper mentions that certain modifications are necessary for the algorithm to work in practice. These modifications are a core part of the paper's contribution, and they are discussed thoroughly and justified with a mix of theoretical and empirical arguments."`` To summarize our BLoB's novel contributions: + BLoB is, to our knowledge, the first VI method for LLM fine-tuning; it demonstrates effective uncertainty estimation across diverse datasets. + We provide theoretical analysis, proving the feasibility of optimizing the full-weight variational distribution in the low-rank space of weight update matrices, supported by empirical evidence of its effectiveness and efficiency. + To address BBB's consistent failures, our BLoB introduces: + a novel approximate posterior parameterization method, enabling fast convergence and accurate uncertainty estimation within limited training iterations, and + a novel KL re-weighting scheme that effectively balances data likelihood and model complexity during training. 
We will include the discussion above in the revision as suggested. Pdf: /pdf/b0888858e43b9584ccf2f82653af3b95a0ecec86.pdf
NeurIPS_2024_submissions_huggingface
2024
Active Classification with Few Queries under Misspecification
Accept (spotlight)
Summary: This paper considers active learning of halfspaces with noise that is both computationally efficient and query efficient without distributional assumption on X. Since it is known such problem is "hard" in the standard label query paradigm, it considers a new query model called "threshold statistical queries" (TSQ) where given a function $\phi$, set $S=\{ x_1, .., x_n \} $, and threshold $\tau$, the oracle answers if $\sum_{x_i \in S}\phi(x_i, y_i) > \tau$. Under Massart noise, it gives an active learning algorithm that is both computationally efficient and query efficient with this query model. On the other hand, it shows that under adversarial noise, this noise model cannot lead to query efficient learning. Strengths: - This paper considers a niche but still quite relevant theory problem of active learning for halfspaces. Its results are interesting: it shows that the newly proposed TSQ query model allows for efficient learning of halfspaces under Massart noise without distributional assumptions, while it is not strong enough to resolve the problem under the adversarial noise. - The proposed TSQ query model is a nice and novel generalization/modification of previous models (statistical queries, region queries). The techniques for both the upper bound and lower bound look non-trivial and novel to me. - The paper is written clearly. It provides a comprehensive review of related work and techniques, and clear high-level intuition behind the main results. - The paper is sound, though I did not check proofs in Appendix. Weaknesses: - The label complexity bound in Theorem 1.4 is cubic in both d and log(1/epsilon). It would be interesting to see if this can be improved, but this would be a very minor issue. - It would be more interesting if the authors could comment on how/whether the TSQ query model can make improvement in other noise models, or problems with a general (beyond linear) learning space. 
Technical Quality: 4 Clarity: 3 Questions for Authors: N/A Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating our work. Improving the cubic term in the label complexity is definitely an interesting open question. We will add a conclusion section in the future version of the work to introduce some potential future directions. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I will keep my score and support its acceptance.
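For readers unfamiliar with the query model, the TSQ oracle as defined in the summary above — the learner supplies a map $\phi$, a set $S$, and a threshold $\tau$, and receives the single bit $\mathbb{1}[\sum_{(x_i,y_i)\in S}\phi(x_i,y_i) > \tau]$ — is simple to state in code. This is an illustrative sketch (the pool, the choice of $\phi$, and the example query are made up, not taken from the paper):

```python
from typing import Callable, Sequence, Tuple

def tsq_oracle(
    phi: Callable[[Sequence[float], int], float],
    labeled_subset: Sequence[Tuple[Sequence[float], int]],
    tau: float,
) -> bool:
    """Threshold statistical query: does sum_i phi(x_i, y_i) exceed tau?

    The learner chooses phi, the subset S of the pool, and the threshold;
    the oracle reveals only this single bit about the hidden labels.
    """
    return sum(phi(x, y) for x, y in labeled_subset) > tau

# Example: a plain label query is the special case |S| = 1 with
# phi(x, y) = y and tau = 0 ("is the label of this point positive?").
pool = [([1.0, 2.0], 1), ([0.5, -1.0], -1)]
print(tsq_oracle(lambda x, y: y, pool[:1], 0.0))  # True: first label is +1
```

Region queries and bounded-precision statistical queries arise similarly by restricting the form of $\phi$ and $S$, which is the sense in which TSQ generalizes earlier query languages.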
Summary: This paper focuses on the problem of active learning from enriched queries. In order to be able to learn halfspaces without restrictive distribution assumptions, the authors propose the Threshold Statistical Query (TSQ), which generalizes the region query and the classic statistical learning model query. Using the proposed form of query, the authors propose an algorithm to learn halfspaces in polynomial time, using only poly log(1/$\epsilon$) TSQ queries, under random classification noise and Massart noise. The authors also prove impossibility for the adversarial noise case. Strengths: - The proposed query has motivations that are easy to understand, and successfully yields a learning algorithm with carefully designed steps. - The paper is self-contained. Essential background and definitions are clearly presented. - The discussion on algorithm design and theory derivation is solid and clear. Weaknesses: - The manuscript seems incomplete: it is not properly concluded. - An overall discussion section on limitations may also improve the manuscript. - The presentation of the algorithm design can be improved. - Although already demonstrated in Section 2.2, an intuitive introduction on how the strong learning algorithm is designed and how it uses the weak learning algorithm could be added at the beginning of Section 2. Technical Quality: 4 Clarity: 3 Questions for Authors: - On the noise format, the authors consider the persistent form of noise, which stays the same across different query trials. - Is this the reason that existing methods cannot achieve the poly log(1/$\epsilon$) query complexity? - How would the proposed algorithm be influenced if the noise were changed to give random answers at each query? How would it influence finding $\bar{x}$? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The limitation on assumptions and adversarial noise is addressed in the introduction. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the constructive feedback and thank the reviewer for carefully reviewing our manuscript. We will improve the presentation of the paper based on the suggestions and include a conclusion section with further discussion. Below is our response regarding the question on the noise format. >On the noise format, the authors consider the persistent form of noise, which stays the same across different query trials. Is this the reason that existing methods cannot achieve the $poly \log(1/\epsilon)$ query complexity? How would the proposed algorithm be influenced if the noise were changed to give random answers at each query? How would it influence finding $x$? The same algorithm we developed works for the non-persistent setting where there is randomness in every query. This is because we construct queries using randomly selected examples. We will clarify this point in the manuscript. The reason that existing methods cannot achieve low query complexity is the noise itself, not whether it is persistent. As we mentioned in the introduction, a small amount of label noise could make every large region query useless, as it would produce the same answer with high probability. The same would hold even if the noise is persistent. In fact, persistent noise is even more challenging, as it prevents a learner from making repeated queries over a single example to figure out its underlying label, which in some settings could make the problem trivial. --- Rebuttal Comment 1.1: Comment: I thank the authors for clearly addressing my question on why existing methods are infeasible. I also read the other reviewers' comments, with more knowledge of the field, and the corresponding author responses. I would like to keep my score.
Summary: This paper extends the work on pool-based active learning from the realizable case to non-realizable settings: where the observed labels (or answers to active learning queries) can have noise. The paper focuses on learning half-spaces, a fundamental learning theory problem. Existing works on pool-based active learning have designed several query languages that enable learning half-spaces up to $\epsilon$ accuracy using $O(\log(1/\epsilon))$ queries. The algorithms in these works, however, break when there is noise in the responses – even due to benign noise models such as random classification noise. This work designs a new query language – threshold statistical queries – and shows that it enables learning of half-spaces up to $\epsilon$ accuracy with $\mathrm{poly}\log(1/\epsilon)$ samples even when the underlying labels have been corrupted by the Massart noise model (which is a significant extension of the random classification noise model). To complement this result, the authors show that threshold statistical queries are not sufficient to learn halfspaces (or even 1-d thresholds) up to $\epsilon$ accuracy with fewer than $O(1/\epsilon)$ samples in the stronger agnostic learning model. Finally, if instead of the usual $OPT+\epsilon$ guarantee, one considers algorithms with $O(OPT)+\epsilon$ guarantees, then the authors show that it is possible to learn halfspaces with $\tilde{O}(\log(1/\epsilon))$ samples. Strengths: Designing new types of queries that enable learning with exponentially fewer samples than necessary for PAC learning is an important area of active work. This paper studies a central problem in this area: learning halfspaces. As far as I understand, this is the first work to propose a type of query that enables learning halfspaces up to accuracy $\epsilon$ with $\mathrm{poly}\log(1/\epsilon)$ samples when there can be some noise in the answers.
The types of noise models considered are also natural and widely studied in standard PAC learning settings. While I am familiar enough with this area to comment on the novelty of the techniques, the approach and the tools used (e.g., Forster’s transform) seem natural. Something that can potentially be an added strength is if there is hope to use threshold statistical queries (introduced in this work) for learning other hypothesis classes, even non-efficiently: for instance, intersections of halfspaces or (non-axis-aligned) boxes. It would be great to have some discussion on this. Weaknesses: One weakness of the paper is the presentation: while the paper was largely clear and not hard to follow, some sentences and notation are a bit weird, but I am sure the paper would read much better after another pass. Some specific notation-related suggestions/comments:
1. The notation B_1^k used in Theorem 2.1 is non-standard but is used before it is defined. Perhaps, until this notation is introduced, it can be avoided or explained in a footnote.
2. Definition 1.1 is a bit confusing. Specifically the sentence “Each query q : 2S×{±1} → {0, 1} …number in {0, 1}.” I am not sure what “given unknown labels as input” means.
3. I think using \cdot for the inner product is non-standard. Maybe it is better to use the $x^\top y$ notation?
4. Definition 3.1 and some subsequent places use the notation “S =< Sa, Sb >” to denote a set $\{S_a,S_b\}$. Perhaps using braces is better?

Technical Quality: 4 Clarity: 3 Questions for Authors: **Question 1.** The related works section mentions that Theorem 1.4 (the main/first result) can be implemented with standard statistical queries, as opposed to the new threshold statistical queries introduced in this work. This is a bit confusing to me: if statistical queries are sufficient, then why is the model of threshold statistical queries introduced for Theorems 1.4 and 1.5 – the first two results?
(Since Theorem 1.5 shows an impossibility result for learning with $\mathrm{poly}\log(1/\epsilon)$ threshold statistical queries and, further, since statistical queries are a special case of threshold statistical queries, this result should also hold for statistical queries.) If both of these results do indeed work with statistical queries, then I think 1. this should be emphasized and 2. the definition of threshold statistical queries can be delayed till the last result. **Question 2.** The results in this paper work for origin-centered half-spaces, instead of all halfspaces. Is this correct? I do not think this is a major concern as studying origin-centered halfspaces is a standard first step toward developing algorithms for half-spaces. But I think, if my claim is correct, then this should be clarified early in the introduction. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Please see my comments in weakness and questions sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating our work and providing many constructive suggestions. We will improve the presentation in the future version of the manuscript. We think exploring the power of TSQ for other hypothesis classes, such as the intersection of halfspaces or high-dimensional boxes, can be a very promising direction, and we will add more discussion on related directions in the future version of the manuscript. Below we respond to the comments and questions from the reviewer. >Question 1. The related works section mentions that Theorem 1.4 (the main/first result) can be implemented with standard statistical queries, as opposed to the new, threshold statistical queries introduced in this work. This is a bit confusing to me: if statistical queries are sufficient, then why is the model of threshold statistical queries introduced for Theorems 1.4 and 1.5 – the first two results? If both of these results do indeed work with statistical queries, then I think this should be emphasized and the definition of threshold statistical queries can be delayed till the last result. We want to remark that a high-level goal of this paper is to understand the power of queries for efficient learning with noise. Toward this goal, the query model studied should be simple but general enough to capture many existing query strategies. In this paper, we show that TSQ is an interesting general model that incorporates different query languages in the literature, such as SQ with bounded precision, equivalence queries, region queries, label queries, etc. For Theorem 1.5, we want to mention that the return of a classic statistical query is a real number, and the number of bits of information passed by such a query depends on the precision of the query. A statistical query with polynomially small precision can be implemented with a logarithmic number of TSQs, and thus Theorem 1.5 also holds for SQs with bounded precision.
However, as TSQ also captures other types of query languages that operate directly over samples, our lower bound in Theorem 1.5 shows that agnostic learning is indeed intricate and that, to achieve a low query complexity, even more powerful queries need to be devised. On the other hand, TSQ is a very broad class of queries, and thus in some applications, the full generality of TSQs may not be feasible. An efficient learning algorithm using TSQs could in principle be optimized to use a subclass of TSQs with good structure that is easily implementable. For example, in Theorem 1.4, we use TSQs to find points $\bar{x}$ to feed Vaidya’s algorithm. As these queries are sampled from regions with non-trivial probability mass and estimating $\bar{x}$ up to a polynomially small error is enough, we can modify the algorithm to use classic statistical queries with polynomially small precision. On the other hand, in Theorem 1.6, the algorithm we use makes decisions based on both the statistical properties of the dataset and the labels of individual examples, so it cannot be implemented solely with SQs. We will expand these discussions and add more explanations together with examples to emphasize the relation between TSQs and other types of queries. >Question 2. The results in this paper work for origin-centered half-spaces, instead of all halfspaces. Is this correct? As our theorem is distribution-free (no distributional assumption is made), learning origin-centered halfspaces is equivalent to learning general halfspaces. This can be done by artificially adding another dimension to the problem. --- Rebuttal Comment 1.1: Title: Response to authors Comment: I thank the authors for taking the time to respond to my questions. I think it would be good to include some discussion comparing TSQ and SQ queries -- expanding upon what the authors write in the rebuttal. Thanks for answering Question 2. I will retain my rating and think the paper should be accepted.
Summary: This paper is concerned with active learning of halfspaces under persistent Massart noise, i.e., active learning under data $(X,Y)$ for which $\exists w^* \in \mathbb{R}^d, \eta \in [0,1/2)$ such that $P(Y = \mathrm{sign}(\langle w^*, X\rangle)) \ge 1-\eta$. The paper proposes a new query language for this task, "threshold statistical queries" (TSQs). A TSQ consists of a set $S \subset (\mathcal{X}\times \mathcal{Y})^m,$ a map $\phi$ from $\mathcal{X} \times \mathcal{Y}$ to the reals, and a threshold, a real number, $\tau$, such that in response to a query of the form $(\phi, S,\tau),$ the learner receives feedback indicating whether $\sum_{(x_i, y_i) \in S} \phi(x_i, y_i) > \tau$ or not. The main contribution of this work is the design and analysis of a new algorithm that actively learns halfspaces in the described setting to error $\eta + \varepsilon$ with $O((d \log(1/\varepsilon))^3)$ TSQs and in polynomial time, without the use of structural assumptions on the feature distribution of the data. This method relies on an insightful way to use a small number of (simple) TSQs to both check if a vector $w$ approximates $\mathrm{sign}(\langle w^*, x\rangle)$ to $(\eta + \varepsilon)$-error _even under Massart noise_ over a set of $x$ that are isotropic and (near)-unit norm and satisfy $\{ |\langle w, x\rangle| \ge 1/(2\sqrt{\mathrm{dim}(x)}) \},$ and, if not, to build a witness for the same. This core idea is exploited to construct a convex feasibility oracle using a small number of TSQs via an existing method (Vaidya's Algorithm), which in turn is used to find an $\varepsilon$-neighbour of $w^*$ with a small number of TSQs and limited computation, at least over isotropic and near-unit-norm features. This is extended to general feature distributions by exploiting Forster's transform, as has appeared in recent work on learning halfspaces under noise.
This result is complemented by a negative result concerning the stronger model of agnostic learning (wherein some arbitrary $\eta$-fraction of the data can have labels that disagree with $\mathrm{sign}(\langle w^*, x\rangle)$, without the stochastic structure imposed by Massart's condition). This result states that in such a setting, even learning a singleton out of a size-$n$ domain to excess error $1/4n$ requires $\Omega(n)$ TSQs, and thus shows that in general, one cannot use fewer than $\Omega(1/\varepsilon)$ TSQs to learn such a class (which further has a natural implication for halfspaces). The proof uses a reduction to distributed learning that is quite interesting. Strengths: Active learning of halfspaces under Massart noise is a challenging problem and of deep interest to the theoretical ML community, and thus this paper is certainly pertinent to the audience of NeurIPS. To my reading, the results are correct. Prior work is discussed in impressive detail, and the investigation of the paper is well contextualised. Before offering more subjective comments, I do want to say that I am not an expert in this subfield of active learning, and my adjudication, especially of the novelty of the work, must thus be treated with care. I find the results of this paper very interesting. I think that the TSQ setup does not appear to be _too_ powerful from the get go, and fits fairly well with some of the mistake-based query structures studied in prior work. The result captures the highly nontrivial setting of learning halfspaces without feature distribution assumptions, and the method proposed is elegant and clever. The writing of the paper is also excellent, and communicates subtle ideas in a clear manner, making the approach seem deceptively simple. On the whole, I think this is a strong contribution to the literature. Weaknesses: I don't particularly see major weaknesses with the paper.
However, I think that the main lacuna lies in the limited contextualisation of the a priori power of TSQs in the paper. While I appreciate the clear contextualisation of the investigation of query structures in active learning, I think the practicality of TSQs is not very clearly discussed in the paper. Mainly, the paper discusses this in the context of mistake-based queries, but clearly the TSQ setting is more onerous for a labeler, since they must compute $\phi$ over all examples in $S$ (rather than, e.g., simply declaring the first mistake they find). I think that a frank discussion of how the authors think this query structure may be executed in practice would strengthen the contribution. Technical Quality: 4 Clarity: 4 Questions for Authors: - Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: I think this is fine viz the social impact stuff, but I suppose the contextualisation of TSQs I mentioned above can be viewed as a limitation that should be discussed a little more. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for carefully reviewing our work and for the constructive feedback. Below is our response about the practicality of TSQs. > While I appreciate the clear contextualisation of the investigation of query structures in active learning, I think the practicality of TSQs is not very clearly discussed in the paper. Mainly, the paper discusses this in the context of mistake-based queries, but clearly the TSQ setting is more onerous for a labeler, since they must compute $\phi$ over all examples in $S$ (rather than, e.g., simply declaring the first mistake they find). Mistake-based queries themselves have already proven useful in practice. For example, as mentioned in [BH12], mistake-based queries are used in Faces in Apple iPhoto. From a theoretical perspective, TSQ is a natural generalization of previous query languages such as region queries, statistical queries, and label queries. In this paper, we did not focus on optimizing the structure of the queries we use in the algorithms. In fact, an efficient learning algorithm using TSQ may not exploit the full power of TSQ, and thus, depending on the application, one might be able to instead use other subclasses of TSQ that can be easily implemented. We believe that studying the tradeoff between the complexity of the query structure and the query complexity is very important, and we thus expect more future work on designing simple and user-friendly TSQs for various learning problems. From an application perspective, we think learning with TSQ can also be used to formulate problems arising from practical applications, as TSQ is a class of queries that can be computed in linear time.
For example, there are many applications, such as solving complicated tasks by interacting with LLMs, where a learning problem could in general be very hard for an LLM to solve, but human experts can break the problem down into a sequence of simple queries/questions, such as TSQs, that can be computed and verified quickly by powerful models, thereby using the LLM as a tool to solve complicated tasks. We will add more detailed discussions about these in the future version of the manuscript. >Reference >[BH12] Balcan, Maria Florina, and Steve Hanneke. "Robust interactive learning." Conference on Learning Theory. JMLR Workshop and Conference Proceedings, 2012. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications, I hope that this interesting discussion does make it to the final version. I will keep my score.
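For concreteness, the Massart noise condition from the summary above — clean labels $\mathrm{sign}(\langle w^*, x\rangle)$ flipped independently with probability $\eta(x) \le \eta < 1/2$ — can be sketched as a data generator. This is an illustration only; the bound `eta_max` and the uniform choice of the per-point flip rate $\eta(x)$ are assumptions made for the sketch, not something specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def massart_labels(X, w_star, eta_max, rng):
    """Labels sign(<w*, x>) flipped independently with probability
    eta(x) <= eta_max (Massart's condition); eta(x) may depend on x."""
    clean = np.sign(X @ w_star)
    eta = rng.uniform(0.0, eta_max, size=len(X))  # adversary's per-point rate
    flips = rng.random(len(X)) < eta
    return np.where(flips, -clean, clean)

d, n, eta_max = 5, 10_000, 0.2
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = massart_labels(X, w_star, eta_max, rng)
noise_rate = np.mean(y != np.sign(X @ w_star))
print(noise_rate)  # empirical flip rate, bounded by eta_max
```

Note the flips are drawn once and fixed, matching the persistent-noise setting: repeating a query over the same example would return the same (possibly corrupted) label.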
Rebuttal 1: Rebuttal: We thank the reviewers for their time and effort in providing feedback. We are encouraged by the positive comments and that all the reviewers appreciated the paper for the following: (i) novelty and interesting results (**VrwT, vqip, WAF8, DNzP**), (ii) technically clean and interesting (**VrwT, WAF8, DNzP**), and (iii) clear presentation and strong motivation (**VrwT, vqip, WAF8, DNzP**). Below, we address the individual questions and comments by the reviewers separately.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
On Sparse Canonical Correlation Analysis
Accept (poster)
Summary: The paper presents an efficient MISDP algorithm for sparse CCA. Strengths: Sparse CCA is an important and interesting problem. The analysis is useful for this area. Weaknesses: - Typos: optimiality -> optimality line 167 - Limitations are not fully addressed - No separate evaluation of sparsity in the experiments (see Questions) - Most experiments have running times of exactly 1 second; it would be clearer to show more decimal digits. Technical Quality: 3 Clarity: 3 Questions for Authors: - Your analysis is to compute the first pair of basis vectors; how does your algorithm work to compute multiple pairs? - Why is the MIPGap always 0 (except when timed out)? Does a dataset exist where it is not zero? - Related to the above, can the authors expand on the limitations of the proposed method? - In the experiments, to evaluate the different algorithms, you use the distance to the optimum. Do you also compare sparsity explicitly? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See above Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments and feedback. In the following, we provide detailed responses to each of the issues raised by the reviewer. 1. [Q1: Typo] We will fix the typo in the revision. 2. [Q2: Limitations] The limitations mainly arise from two aspects: Algorithm 1 and the proposed MICQP are efficient but require low-rank data matrices, and the branch-and-cut algorithm does not scale to large instances. However, we respectfully believe that addressing these limitations fully may not be feasible, as SCCA is generally an NP-hard problem. To alleviate these limitations, we propose scalable greedy and local search algorithms without any precondition to approximately solve SCCA. They successfully yield a near-optimal solution to SCCA for all testing instances. Besides, in Table 1 in the PDF file, the supplementary experimental comparison demonstrates the superior scalability and high-quality outputs of our local search algorithm compared to existing SCCA methods. Therefore, we recommend using the approximate greedy and local search algorithms in high-dimensional, full-rank data contexts. In practice, the branch-and-cut algorithm can quickly find a high-quality solution at the beginning of the iterations, but proving its optimality takes a long time. Hence, we recommend setting a short time limit for practical applications. We hope this mitigates the limitations well. 3. [Q3: About sparsity] This paper studies SCCA with $\ell_0$ norm constraints on the weight vectors $x$ and $y$, restricting the number of nonzero weights to be no larger than the sparsity thresholds $s_1$ and $s_2$, respectively. Hence, our approximation and exact algorithms can guarantee the desired sparsity of SCCA. We have compared the proposed local search algorithm with the SCCA methods of [10, 32, 41, 37] in correlation value, sparsity, and running time.
The computational results on synthetic, UCI, and breast cancer data are presented in Table 1 in the PDF file. Please note that we highlight the best correlation and sparsity results in bold. * Unlike the local search algorithm, these existing methods do not strictly enforce the exact sparsity requirement, i.e., the $\ell_0$ norm constraints. Consequently, the local search algorithm achieves the best sparsity for nearly all testing cases. Note that the performance of these existing methods relies heavily on the penalty parameters. We have tuned the parameters using the ranges recommended in the literature and have reported the best results. We hope this explains it well. 4. [Q4: Time] We have presented two decimal digits to differentiate running times of less than one second. Please refer to Tables 1 and 2 in the PDF file. 5. [Q5: Multiple SCCA] First, the multiple CCA problem can be formulated as follows: $$ \max_{x\in \mathbb{R}^{n\times k}, y\in \mathbb{R}^{m\times k}} \big\\{\text{tr}(x^{\top} A y): x^{\top} B x = I_k, y^{\top} C y = I_k \big\\}, $$ where $k$ denotes the number of pairs of basis vectors and $I_k$ denotes the identity matrix of size $k$. As $x, y$ can be matrices, we propose adding row-sparse constraints to obtain multiple SCCA, which is defined as: $$ \max_{x\in \mathbb{R}^{n\times k}, y\in \mathbb{R}^{m\times k}} \big\\{\text{tr}(x^{\top} A y): x^{\top} B x = I_k, y^{\top} C y = I_k, \\|x\\|_0\le s_1, \\|y\\|_0\le s_2 \big\\}, $$ where we let $\\|x\\|_0$ and $\\|y\\|_0$ denote the number of nonzero rows of $x$ and $y$, respectively. The proposed multiple SCCA model can (i) compute the multiple weight vectors $(x, y)$ simultaneously and (ii) enforce the sparsity and orthogonality strictly. To be specific, the constraints $x^{\top} B x = I_k, y^{\top} C y = I_k$ ensure the orthogonal left- and right-canonical loading vectors in multiple SCCA.
By the definition of row sparsity, the resultant multiple left- and right-basis vectors, i.e., the columns of $x$ and $y$, share the same nonzero rows, respectively. More importantly, the row-sparsity enables us to readily extend the proposed algorithms to solve multiple SCCA. We have tested them on UCI data, and the computational results are presented in Table 2 in the PDF file. As $k$ increases, it takes branch-and-cut a longer time to return an optimal solution. 6. [Q6: MIPGap & Branch-and-cut] MIPGap is a key performance measure for branch-and-cut, which is an iterative algorithm designed for solving mixed integer programming (MIP) to global optimality. By leveraging the proposed mixed-integer semidefinite program, we develop a customized branch-and-cut algorithm to exactly solve SCCA. Specifically, MIPGap is defined as "(UB – LB)/LB" at each iteration of branch-and-cut, where LB is the objective value computed from the best-known feasible solution, and UB denotes the best-known upper bound. However, branch-and-cut is hard to scale for large instances. Thus, we set a time limit of one hour. If MIPGap reaches 0 within one hour, branch-and-cut successfully yields an optimal solution. If not, we report the MIPGap to indicate how far the current solution is from optimality. In practice, the branch-and-cut algorithm can quickly find a high-quality solution at the beginning of the iterations, but proving its optimality takes a long time. Hence, we recommend setting a short time limit for practical applications. Besides, the complexity of branch-and-cut depends on the initial MIPGap, known as the root gap in the literature. We compute the root gap by using the local search algorithm’s output as LB and the convex relaxation value as UB. A small root gap often leads to a fast convergence of MIPGap to 0. Therefore, if the local search algorithm is near-optimal and the convex relaxation is tight, branch-and-cut will efficiently solve SCCA to optimality. 
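For concreteness, the MIPGap and root-gap computations described above can be sketched in a few lines (an illustrative sketch with made-up numbers, not the authors' implementation):

```python
def mip_gap(ub: float, lb: float) -> float:
    """MIPGap = (UB - LB) / LB, with LB the objective of the best-known
    feasible solution and UB the best-known upper bound (LB > 0 assumed,
    since the SCCA objective is a correlation value)."""
    assert lb > 0
    return (ub - lb) / lb

# Root gap: LB from the local search output, UB from the convex relaxation.
# The numbers below are illustrative only, not results from the paper.
root_gap = mip_gap(ub=0.95, lb=0.90)
```

A small root gap here would suggest, as argued above, that branch-and-cut will converge quickly.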
In summary, the limitation of branch-and-cut is that it may struggle with solving large-scale instances, especially when the root gap is large. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I will keep my score.
Summary: The paper proposes several algorithms for the Sparse Canonical Correlation Analysis (SCCA) problem. First, the authors present an exact semidefinite programming representation of the classic CCA problem. These results are then used to derive an equivalent combinatorial formulation of the Sparse CCA problem, which can be optimized using greedy and local search algorithms. For the case that the sample covariance matrix has low rank, the paper then derives a polynomial-time algorithm to solve SCCA to global optimality. Next, the authors show the NP-hardness of the case where the cross-covariance matrix has rank one, and then discuss how the problem can be solved using two Mixed-Integer Convex Quadratic Programs. The authors then derive an equivalent mixed-integer semidefinite programming reformulation for the SCCA problem, which can be simplified so that it allows them to develop a tailored branch-and-cut algorithm to solve SCCA to global optimality. Finally, the paper performs a numerical evaluation of the proposed algorithms on synthetic data as well as several real-world datasets. It is shown that their approximate algorithms can solve small- to medium-scale problems in a few seconds, while achieving the optimal value in most cases. In the rank-one case, they can globally solve problems up to size 200x200 within one hour. Strengths: The paper proposed several new algorithms for the sparse CCA problem in different scenarios. The authors prove the equivalence of their reformulations as well as the convergence of their exact algorithm for SCCA in the low-rank case in O(n^3+m^3). In the experiments the approach is shown to be applicable to several synthetic and real-world datasets. Weaknesses: An experimental comparison of the approach to other state-of-the-art methods for sparse CCA is missing and should be added to the paper, e.g. the methods introduced by Witten et al. [41] or Parkhomenko et al. [37].
Technical Quality: 2 Clarity: 3 Questions for Authors: What are the results for the buzz dataset, which is included in the data description in Appendix E but not in the results in Table 2? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: As also discussed by the authors, the proposed algorithms have certain preconditions, such as low rank, which means they are not applicable in all cases. Moreover, the exact branch-and-cut-based algorithms do not scale to large datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to the reviewer for their careful evaluation and valuable suggestions. We have addressed the reviewer's comments below. 1. [Q1: Experimental comparison] We have compared the proposed local search algorithm with the SCCA methods of [10, 32, 41, 37] in correlation value, sparsity, and running time. The computational results on synthetic, UCI, and breast cancer data are presented in Table 1 in the PDF file. Please note that we highlight the best correlation and sparsity results in bold. * Unlike the local search algorithm, these existing methods do not strictly enforce the exact sparsity requirement, i.e., the $\ell_0$ norm constraints on variables $x, y$. Consequently, the local search algorithm achieves the best sparsity for nearly all testing cases. * More importantly, the local search algorithm yields a larger correlation value than these existing methods in 15 out of 22 testing cases. * Finally, the running time of the local search algorithm outperforms that of [10, 32, 37]. Note that the performance of these existing methods relies heavily on the penalty parameters. We have tuned the parameters using the ranges recommended in the literature and have reported the best results. We hope these efforts are satisfactory. 2. [Q2: Buzz] We have included the numerical experiments for the buzz dataset, as presented in the last two rows of Table 2 in the PDF file. 3. [Q3: Limitations] In practice, the branch-and-cut algorithm can quickly find a high-quality solution at the beginning of the iterations, but proving its optimality takes a long time. Hence, we recommend setting a short time limit for practical applications. In addition, please note that our proposed greedy and local search algorithms are scalable and do not require preconditions. They yield a near-optimal solution to SCCA for all testing instances.
The supplementary experimental comparison further demonstrates the superior scalability and high-quality outputs of our local search algorithm compared to existing SCCA methods, as seen in Table 1 in the PDF file. We hope this mitigates the limitations well. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for your rebuttal. The authors responded to my concerns and provided additional experimental results, comparing the proposed local search algorithm with the SCCA methods of [10, 32, 41, 37], as also pointed out by other reviewers. They also added the results of the buzz dataset to the evaluation, even though the dimensions do not quite match: in the table we have n=38 and m=39, but in the dataset description in the appendix it says n=33 and m=34. Could you please clarify this? Moreover, the authors also added further discussions on the limitations of the approaches. --- Reply to Comment 1.1.1: Title: Response Comment: We thank the reviewer for the comments. We apologize for the typo. As mentioned, the UCI dataset is split into the first $n$ variables and the remaining $m$ variables to construct the sample covariance matrices $A, B, C$. In the buzz dataset, there are 77 variables. Therefore, we have $n=38$ and $m=39$, instead of $n=33$ and $m=34$. We will fix the typo in the revision.
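As an aside, the two-view split described in the reply above can be sketched with NumPy (synthetic stand-in data, not the authors' preprocessing code):

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((500, 77))  # stand-in for the 77-variable buzz data

n = 77 // 2                         # first n = 38 variables form one view
Dc = D - D.mean(axis=0)             # center the data
S = Dc.T @ Dc / len(Dc)             # (n+m) x (n+m) sample covariance

B = S[:n, :n]                       # within-view covariance, view 1
C = S[n:, n:]                       # within-view covariance, view 2
A = S[:n, n:]                       # cross-covariance block
```

The remaining m = 39 columns form the second view, matching the $n=38$, $m=39$ split clarified above.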
Summary: To enhance the interpretability of CCA, the authors explore sparse CCA. It is interesting to note that sparse CCA generalizes sparse PCA, sparse SVD, and sparse regression. The authors derive efficient algorithms to solve sparse CCA and perform a theoretical analysis for sparse CCA. The effectiveness of the proposed method is verified through numerical experiments. Strengths: 1. The paper is well-written and well-organized. 2. The paper works on sparse CCA and gives a derivation of an equivalent mixed-integer semidefinite programming model. The authors verify the effectiveness of the proposed method via numerical experiments. Weaknesses: I have several minor concerns and hope to hear from the authors: 1. The work in [10] also addresses the sparsity of CCA. Why not give experimental results for the reference [10] or other sparse methods? 2. In section 4, the authors give a mixed-integer semidefinite program. It would be much better to discuss this model with toy examples. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the above weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have explained some of the limitations in the article, and others can be seen from the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to the reviewer for their careful evaluation and valuable suggestions. We have addressed the reviewer's comments below. 1. [Q1: Experimental comparison] We have compared the proposed local search algorithm with the SCCA methods of [10, 32, 41, 37] in correlation value, sparsity, and running time. The computational results on synthetic, UCI, and breast cancer data are presented in Table 1 in the PDF file. Please note that we highlight the best correlation and sparsity results in bold. * Unlike the local search algorithm, these existing methods do not strictly enforce the exact sparsity requirement, i.e., the $\ell_0$ norm constraints on variables $x, y$. Consequently, the local search algorithm achieves the best sparsity for nearly all testing cases. * More importantly, the local search algorithm yields a larger correlation value than these existing methods in 15 out of 22 testing cases. * Finally, the running time of the local search algorithm outperforms that of [10, 32, 37]. Note that the performance of these existing methods relies heavily on the penalty parameters. We have tuned the parameters using the ranges recommended in the literature and have reported the best results. We hope these efforts are satisfactory. 2. [Q2: About MISDP] The mixed-integer semidefinite program (MISDP) is an equivalent formulation of SCCA. Below is a detailed discussion of the equivalence. Suppose $x, y$ is a pair of solutions to SCCA. * For each $i\in [n]$, $z_i$ is the binary characteristic variable of the entry $x_i$, i.e., $z_i=0$ if $x_i=0$ and $z_i=1$ if $x_i\neq 0$. Analogously, the binary variable $z_{j+n}$ corresponds to whether $y_j$ equals zero for each $j \in [m]$. Notably, the sparsity thresholds $s_1$ and $s_2$ result in the constraints $\sum_{i\in [n]} z_i \le s_1$ and $\sum_{j\in [m]} z_{j+n} \le s_2$ in MISDP. * For each $i\in [n+m]$, MISDP has the constraint $X_{ii} \le M_{ii} z_i$, implying that $X_{ii} =0$ if $z_i=0$.
Since matrix $X$ is positive semidefinite, it is easy to check that the $i$th column and $i$th row of $X$ are all zeros if $z_i=0$. For example, suppose $n+m=2$. Then, given a solution $z_1=1$ and $z_2=0$, $X$ is a $2\times 2$ matrix and admits the form of $\begin{pmatrix} X_{11} & 0\\\\ 0 & 0 \end{pmatrix}$. * Therefore, the variable $X$ in MISDP is a sparse matrix whose zero entries are consistent with those of the vector $\begin{pmatrix} x\\\\ y \end{pmatrix}$, provided that the binary vector $z$ corresponds to $\begin{pmatrix} x\\\\ y \end{pmatrix}$. By leveraging Part (iii) of Proposition 1 and the above results, we can show that there is a one-to-one correspondence between $(X, z)$ of MISDP and $(x, y)$ of SCCA. This establishes the equivalence of MISDP and SCCA. We will add this discussion immediately after MISDP in the revision. We hope this explains our MISDP well. --- Rebuttal Comment 1.1: Comment: I thank the authors for clarifying some of the questions
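The diagonal argument above, namely that $X_{ii}=0$ forces the entire $i$-th row and column of a positive semidefinite matrix to vanish, follows from the nonnegativity of the $2\times 2$ principal minors $X_{ii}X_{jj}-X_{ij}^2$, and can be checked numerically (our own sketch, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((6, 6))
G[:, 2] = 0.0           # Gram construction: X = G^T G is PSD with X_22 = 0
X = G.T @ G

# Every 2x2 principal minor of a PSD matrix is nonnegative,
# so X_ii = 0 implies X_ij = 0 for all j.
for i in range(6):
    for j in range(6):
        assert X[i, i] * X[j, j] - X[i, j] ** 2 >= -1e-9

# Consequently row and column 2 of X are entirely zero.
assert np.allclose(X[2, :], 0) and np.allclose(X[:, 2], 0)
```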
Summary: This paper presented analyses of sparse canonical correlation analysis (SCCA) with optimization tools, including an SDP relaxation formulation. The authors discussed the theoretical properties and demonstrated numerical experiments. Strengths: The paper presented detailed theoretical developments supported by numerical evaluations. The presentation is generally easy to follow. Weaknesses: 1. The current manuscript focused mostly on the mathematical properties of SCCA with optimization tools. Though such analyses are valuable, it would be more helpful to the community if the authors could provide connections to learning practices/applications, i.e., how the algorithms help understand real datasets. 2. Technically, it would be valuable to have more discussion of the covariance structure of the matrix, instead of treating it as $A, B, C$ blocks. For example, line 136 (Observation 1) becomes a trivial fact if we treat CCA as finding the principal angle/vector of two subspaces, so the cosine of the principal angle cannot exceed one. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Sec 2.1: the **exact** *semidefinite programming (SDP) reformulation* seems to contradict the fact that this is a **relaxation** of the original problem. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments and feedback. In the following, we provide detailed responses to each of the issues raised by the reviewer. 1. [Q1: Understand the real datasets] We have applied the local search algorithm to evaluate the performance of SCCA against different sparsity levels. Specifically, for a given dataset, we compute the ratio of correlations between SCCA and CCA for various $s_1, s_2$ parameters. Note that CCA leverages all $n, m$ variables by default and does not change with $s_1, s_2$. We test the real $\textit{spambase}$ data and synthetic data, and the results are displayed in Figure 1 in the PDF. This visualization provides insights into the maximum sparsity SCCA can achieve while maintaining the correlation of the full data. For the real $\textit{spambase}$ data, SCCA almost recovers the correlation of CCA when $s_1\approx n/2$ and $s_2\approx m/2$, as seen in Figure 1(a) in the PDF. This also indicates the underlying sparse structure of the dataset. 2. [Q2: Covariance structure] As the reviewer suggested, we will add a remark to explain Observation 1. We are also inspired to expand on the discussion of Theorem 2 using the covariance structure, as detailed below. We will add this remark immediately after Theorem 2. It is worth noting that the covariance structure offers an intuitive explanation but not a rigorous proof. Therefore, we will keep the original proof to ensure that Theorem 2 is proven rigorously and that the subsequent Algorithm 1 can be developed. >Specifically, if matrices $B$ and $C$ are of rank $r$ and $\hat{r}$, respectively, there are only $r$ and $\hat{r}$ linearly independent vectors in the subspaces corresponding to $B$ and $C$. Thus, the cosine of the principal angle can always be represented by these $r$ and $\hat{r}$ vectors. As a result, the weight vectors of CCA consist of only $r$ and $\hat{r}$ nonzero elements. 3.
[Q3: About SDP] When an SDP relaxation yields the same optimal value as the original problem, we say it achieves the objective exactness, commonly termed exact SDP in the literature. According to Proposition 1, CCA coincides with its SDP relaxation. That is, we obtain an exact SDP reformulation of CCA. --- Rebuttal Comment 1.1: Comment: I have read through the rebuttal. Thanks for the detailed response and clarifications.
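As a numerical companion to Observation 1 and the SDP discussion above (our own sketch, independent of the paper's SDP machinery), the CCA value can be computed by whitening: it is the top singular value of $B^{-1/2} A C^{-1/2}$, and it never exceeds one.

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.standard_normal((300, 9))
X, Y = Z[:, :4], Z[:, 4:]                     # two views, n = 4 and m = 5
Zc = np.hstack([X - X.mean(0), Y - Y.mean(0)])
S = Zc.T @ Zc / len(Zc)                       # joint sample covariance
A, B, C = S[:4, 4:], S[:4, :4], S[4:, 4:]     # cross and within-view blocks

# Whiten each view with the inverse Cholesky factor (B = L L^T):
# max x^T A y s.t. x^T B x = y^T C y = 1 becomes an ordinary SVD problem.
Li = np.linalg.inv(np.linalg.cholesky(B))
Mi = np.linalg.inv(np.linalg.cholesky(C))
rho = np.linalg.svd(Li @ A @ Mi.T, compute_uv=False)[0]   # CCA value
```

Consistent with Observation 1, `rho` (the cosine of the first principal angle between the two whitened subspaces) lies in (0, 1].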
Rebuttal 1: Rebuttal: We thank the review team for their insightful comments and feedback, which significantly improve the quality of our paper. In the following, we provide detailed responses to the key issues raised by the reviewers. 1. [Experimental comparison] We have compared the proposed local search algorithm with the SCCA methods of [10, 32, 41, 37] in correlation value, sparsity, and running time. The computational results on synthetic, UCI, and breast cancer data are presented in Table 1 in the PDF file. Please note that we highlight the best correlation and sparsity results in bold. * Unlike the local search algorithm, these existing methods do not strictly enforce the exact sparsity requirement, i.e., the $\ell_0$ norm constraints on variables $x, y$. Consequently, the local search algorithm achieves the best sparsity for nearly all testing cases. * More importantly, the local search algorithm yields a larger correlation value than these existing methods in 15 out of 22 testing cases. * Finally, the running time of the local search algorithm outperforms that of [10, 32, 37]. Note that the performance of these existing methods highly relies on the penalty parameters. We have tuned the parameters using the ranges recommended in the literature and have reported the best results. We hope these efforts are satisfactory. 2. [Limitations] The limitations mainly arise from two aspects: Algorithm 1 and the proposed MICQP are efficient but require low-rank data matrices, and the branch-and-cut algorithm does not scale to large instances. However, we respectfully believe that addressing these limitations fully may not be feasible, as SCCA is generally an NP-hard problem. To alleviate these limitations, we propose scalable greedy and local search algorithms without any precondition to approximately solve SCCA. They successfully yield a near-optimal solution to SCCA for all testing instances.
Besides, in Table 1 in the PDF file, the supplementary experimental comparison demonstrates the superior scalability and high-quality outputs of our local search algorithm compared to existing SCCA methods. Therefore, we recommend using the approximate greedy and local search algorithms in high-dimensional, full-rank data contexts. In practice, the branch-and-cut algorithm can quickly find a high-quality solution at the beginning of the iterations, but proving its optimality takes a long time. Hence, we recommend setting a short time limit for practical applications. We hope this mitigates the limitations well. 3. [Multiple SCCA] First, the multiple CCA problem can be formulated as follows: $$ \max_{x\in \mathbb{R}^{n\times k}, y\in \mathbb{R}^{m\times k}} \big\\{\text{tr}(x^{\top} A y): x^{\top} B x = I_k, y^{\top} C y = I_k \big\\}, $$ where $k$ denotes the number of pairs of basis vectors and $I_k$ denotes the identity matrix of size $k$. As $x, y$ can be matrices, we propose adding row-sparse constraints to obtain multiple SCCA, which is defined as: $$ \max_{x\in \mathbb{R}^{n\times k}, y\in \mathbb{R}^{m\times k}} \big\\{\text{tr}(x^{\top} A y): x^{\top} B x = I_k, y^{\top} C y = I_k, \\|x\\|_0\le s_1, \\|y\\|_0\le s_2 \big\\}, $$ where we let $\\|x\\|_0$ and $\\|y\\|_0$ denote the number of nonzero rows of $x$ and $y$, respectively. The proposed multiple SCCA model can (i) compute the multiple weight vectors $(x, y)$ simultaneously and (ii) enforce the sparsity and orthogonality strictly. To be specific, the constraints $x^{\top} B x = I_k, y^{\top} C y = I_k$ ensure the orthogonal left- and right-canonical loading vectors in multiple SCCA. By the definition of row sparsity, the resultant multiple left- and right-basis vectors, i.e., the columns of $x$ and $y$, share the same nonzero rows, respectively. More importantly, the row sparsity enables us to readily extend the proposed algorithms to solve multiple SCCA.
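As an illustration of the constraints in the multiple-SCCA formulation above, here is a hypothetical feasibility check (function names and tolerances are ours, not from the paper):

```python
import numpy as np

def row_sparsity(x, tol=1e-9):
    """Number of nonzero rows of x, i.e. ||x||_0 in the row-sparse sense."""
    return int((np.abs(x).max(axis=1) > tol).sum())

def is_feasible(x, y, B, C, s1, s2, tol=1e-6):
    """Check x^T B x = I_k, y^T C y = I_k, and the row-sparsity budgets."""
    k = x.shape[1]
    ortho = (np.allclose(x.T @ B @ x, np.eye(k), atol=tol)
             and np.allclose(y.T @ C @ y, np.eye(k), atol=tol))
    return ortho and row_sparsity(x) <= s1 and row_sparsity(y) <= s2

# Toy check with B = I_n, C = I_m: orthonormal columns supported on few rows.
n, m, k = 6, 5, 2
x = np.zeros((n, k)); x[0, 0] = x[1, 1] = 1.0
y = np.zeros((m, k)); y[0, 0] = y[2, 1] = 1.0
feasible = is_feasible(x, y, np.eye(n), np.eye(m), s1=2, s2=2)
```

Note that the columns of `x` share the same two nonzero rows, matching the row-sparsity semantics described above.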
We have tested them on UCI data, and the computational results are presented in Table 2 in the PDF file. As $k$ increases, it takes branch-and-cut a longer time to return an optimal solution. Pdf: /pdf/f32e0ad7aa08572e92fd7198449da55ea6e940db.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
DEPrune: Depth-wise Separable Convolution Pruning for Maximizing GPU Parallelism
Accept (poster)
Summary: This paper proposed a pruning method targeting the previously neglected depth-wise convolution layers, which are widely used in vision models. This work is based on the Diagonal-wise Refactorization (DR) computation strategy used on GPUs, and pruning one weight point in a depth-wise convolution means converting the corresponding column into a zero vector. To accelerate the inference of pruned models, the authors proposed to enhance GPU performance with two hardware-friendly methods, BWT and HSR, which achieve load balance and align memory accesses with the GPU tile size. The evaluation results show the proposed framework brings significant speedup in MobileNet and EfficientNet. Strengths: 1. The pruning method is based on the DR computation strategy, which provides the opportunity to convert point pruning in depth-wise convolution layers into masking the corresponding columns as zeros. 2. To accelerate inference, the proposed BWT and HSR methods achieve load balance among GPU kernels and align the computation with the tile size, speeding up GPU processing. 3. The evaluation breaks down the pruning method, BWT, and HSR, demonstrating the contribution of each part. Weaknesses: 1. The overhead of the pruning method is not analyzed. How much time does it take to decide pruning points, balance the load, and set the parameters in HSR? 2. Compared with the gain in pruning ratio, the accuracy drop seems non-negligible. Can we modify the algorithm to set the HSR constraints to align the memory access and computation with the tile size in the pruning phase, without the need for recalibration? Does it help with accuracy? 3. Can you provide an evaluation on an edge GPU? Technical Quality: 2 Clarity: 3 Questions for Authors: See weakness. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: See weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your support of our paper and the valuable feedback. We respond to your reviews as follows. [m-#] denotes a reference in the manuscript. **Weakness 1) [The overhead of pruning method is not analyzed. How much time does it need to decide pruning points, loading balance and the parameters in HSR?]** We would like to clarify a few things that might not have been clearly expressed in the paper. The pruning, load balancing, and HSR mentioned are all required only during the pruning process (offline). Pre-processing (pruning and fine-tuning) is conducted offline, while model inference is performed online. The processes you mentioned are overheads in the offline phase and do not affect model inference time. In Author-Rebuttal-Table.4, this offline overhead still accounts for about 0.6\% of the total pre-processing time for MobileNet-v3-small on ImageNet. The proportion of this overhead becomes even smaller as the number of fine-tuning epochs increases. We will provide detailed figures based on MobileNet-v3-small in Author-Rebuttal-Table.4. **Weakness 2) [Comparing with the gain in pruning ratio, the accuracy drop seems not negligible. Can we modify the algorithm to set the HSR constraints to align the memory access and computation with tile size in pruning phase, without need of recalibration? Does it help with accuracy?]** Yes, the algorithm can be modified as you suggested. However, the absence of recalibration does not significantly affect accuracy. Without the recalibration process, users must set the pruning ratio to match the recalibration results by considering the GPU tile size and each layer's size. Recalibration is an algorithm that automates this process. The main reason there is no significant change in accuracy is as follows: in Figure 7-(C), you can see that DEPrune-BH performs fine-tuning after step 4, which is the entire pruning setting process.
Therefore, whether recalibration is performed or users manually set the pruning ratio by considering the tile size, both are processes before fine-tuning, resulting in almost no change in accuracy. **Weakness 3) [Can you provide the evaluation on edge GPU?]** Certainly. We conducted additional experiments on an edge GPU and measured the inference time before and after pruning for MobileNet-v2. The edge GPU that we used for the experiment is the NVIDIA Jetson Orin Nano 8GB. In Author-Rebuttal-Table.5, the DEPrune-BH pruned model shows a 2.48 times speedup in inference time over the baseline (unpruned model) on the edge GPU. Compared to the GFS [m-50] pruned model, the DEPrune-BH pruned model is approximately 1.62 times faster. From an accuracy perspective, there is no change because the weights are the same as those on the laptop GPU. --- Rebuttal Comment 1.1: Comment: Thank you for your clarification, which addresses my concerns. I would like to improve the score to 6.
Summary: This paper presents Depth-wise Separable Convolution Pruning (DEPrune). DEPrune prunes point-wise convolution and depth-wise convolution (DW-conv), and it is optimized by considering and analyzing the computation of depth-wise separable convolution on GPUs. Experimental results validate the speedup of DEPrune. Strengths: 1. The proposed method supports both point-wise convolution and depth-wise convolution pruning. 2. The performance of the proposed method seems significant. Weaknesses: 1. Some references are incomplete, for example, [23][39]. 2. The proposed scheme is only evaluated on MobileNet-V2, MobileNet-V3, and EfficientNet-B0. How does this method work for other NN models? Technical Quality: 3 Clarity: 2 Questions for Authors: 1. What do “X” and “O” mean in Figure 1? 2. The lower parts of Figure 1 seem misleading, as they do not correspond to the DW-conv example in the upper half. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the time you have taken to review our work and for the constructive feedback. If the reviewer allows us to revise this paper, we will try our best to improve its quality. **Weakness 1) [Some references are incomplete, for example, [23][39].]** We greatly appreciate your guidance on this issue. We will revise and incorporate your suggestions as follows. Additionally, we will double-check whether other references are incomplete and address them accordingly. "[23] Krizhevsky, G. Hinton, et al. Learning Multiple Layers of Features from Tiny Images. Technical report, University of Toronto, 2009." "[39] P. K. Rao and S. Chatterjee. Weight Pruning-UNet: Weight Pruning UNet with Depth-wise Separable Convolutions for Semantic Segmentation of Kidney Tumors. Research Square 2021" **Weakness 2) [The proposed scheme is only evaluated on MobileNet-V2, MobileNet-V3, and EfficientNet-B0. How does this method work for other NN models?]** Following the reviewer's suggestion, we conducted additional experiments with the ConvNeXt-Base model [1]. In Author-Rebuttal-Table.3, DEPrune-BH achieves a 0.46\% higher Top-1 accuracy compared to FP even with an additional 20\% pruning in DW-conv. DEPrune-BH is up to 3.3 times faster than ConvNeXt-Base while achieving a Top-1 accuracy drop within 1.5\%. [1] A ConvNet for the 2020s, CVPR 2022 **Question 1) [What does “X” and “O” mean in Figure 1?]** We agree with the reviewer's comment that the caption in Figure 1 is not clearly presented. The X and O symbols indicate the absence and presence, respectively, of the corresponding characteristic for each method. In the case of (a) Structured Pruning, this approach forms well-structured data patterns that can benefit performance when executed on GPUs. However, it is not a fine-grained method, as the pruning unit size can extend up to a 2D matrix, which is significantly larger than the pruning unit in DEPrune, the method we propose.
Therefore, in the case of (b) DEPrune, it is marked as O for both fine-grained and structured characteristics. We will include these details in the final version of the paper’s caption to prevent any misunderstandings for readers. **Question 2) [The lower parts of Figure 1 seem misleading, which do not correspond to the DW-conv example in the upper half.]** We appreciate the reviewer’s valuable comment that points out the statements that could possibly mislead the readers. The top illustration in Figure 1 depicts a typical depth-wise convolution. The bottom illustration shows the depth-wise convolution rearranged to be more favorable for GPU processing. On the left is the result with (a) structured pruning applied, and on the right is the result after applying (b) DEPrune. Thus, the relationship between the top and bottom illustrations indicates before and after the Diagonalwise Refactorization is applied, and the figure shows the arrangement of the weights. We will enhance the caption with these details to ensure there is no misunderstanding for readers. --- Rebuttal Comment 1.1: Comment: Thanks for the authors’ reply. My concerns are addressed.
Summary: This paper presents DEPrune, which prunes depthwise separable convolution models into a structured sparsity pattern that is friendly to depthwise refactorization. Strengths: 1. Much of the motivation for CNN pruning is to pursue the most lightweight, often edge-deployed, model running under tight resource constraints. Depthwise separable convolution (DSConv)-based models have a proven track record in this regard, and it is true that there are very few attempts to investigate how to prune such models. So, the exploration done in this paper is a welcome addition to the field. 2. The authors clearly demonstrate the further complications caused by filter pruning DSConv models and address them by proposing a structured sparsity pattern that is friendly to depthwise refactorization (a common technique for improving the utility of DSConv), along with many other submodules. The DEPruned models show real end-to-end speedup. 3. Tables 3 & 4 show DEPrune delivers better accuracy and runtime results than applying typical structured pruning methods to DSConv models. Weaknesses: 1. Most of the structured pruning methods compared in Table 4 are a bit old. Although I suspect the conclusion won't change much given the gap and the fact that methods like CC [1] are often plenty performant even today, a comparison with more modern structured pruning methods should still be included. 2. The authors emphasize that "there are no previous pruning techniques that even target depth-wise convolution (DW-conv)" in line 4. What about methods like [2] that address the exact same DSConv? 3. The paper does a good job explaining each of its components in detail but lacks a clear high-level overview of how these components fit together: some abstract visualization with a detailed walkthrough in the caption, as well as a pseudo-code section, should be added. 
Similarly, the experiment procedure lacks clarity: only the epoch budget of the CIFAR-10 experiments is shared, and critical details like the learning rate for finetuning, whether the process is iterative or post-prune, etc., are missing. 4. No throughput or latency (bs = 1) results. 5. Not really a weakness, but DSConv pruning is a relatively niche art with limited familiarity within the pruning community. This paper would benefit from making better contrasts with standard conv pruning to highlight its unique challenges. 6. Is this DSPrune or DEPrune? The proposed method family has so many submodules named with acronyms, and it could really use a richer Table 1 for a more friendly reading experience. [1] Towards Compact CNNs via Collaborative Compression [2] Pruning Depthwise Separable Convolutions for Extra Efficiency Gain of Lightweight Models [3] Revisit Kernel Pruning with Lottery Regulated Grouped Convolutions [4] Structured Sparsity in the NVIDIA Ampere Architecture and Applications in Search Engines Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Why are the efficiency metrics of any methods missing in Table 4? 2. Can the DCP part be applied to general group convolution (where the number of groups != number of channels)? 3. Following Q1, some of the structured pruning methods in the GKP [3] series are known to be able to maintain the output shape of a pruned conv layer by leveraging group convolution. Can this be applied to the PWConv part of the DSConv and therefore avoid the concerns introduced in Section 4.2 and Appendix 6? 4. Is a DEPruned model zero-padded? It looks like the case in Figure 4. If so, what kind of peak memory usage are we seeing pre- and post-pruning? 5. How would this work compare to general structured sparsity approaches with more pruning freedom? e.g., the N:M sparsity [4]? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors addressed the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s time and effort in providing constructive feedback on this manuscript. We will incorporate all of the suggestions into the final version of the paper. [m-\#] refers to the manuscript's references. **Weakness 1)** As the reviewer pointed out, it is important to compare our proposed scheme with recent prior work. In Table 4, we have compared our scheme with two of the most recent research papers [m-11 and m-48], both presented in 2024. If you have any recommendations for recent structured pruning methods, we would be eager to compare them with our DEPrune if circumstances allow. **Weakness 2)** We appreciate your comment. The study you mentioned [2 and m-44] indeed applies an existing gradual pruning method to DSConv, so it is not accurate to state that there are no previous pruning techniques that target DW-Conv. We will revise this statement in the paper if given the opportunity to make revisions: "there are hardly any previous pruning techniques that target depth-wise convolution (DW-conv)" in line 4. We also want to mention that the pruning method proposed in [2 and m-44] is focused on applying gradual pruning [m-55] to DSConv layers, which differs from our approach. The research in [2] removes whole filters to preserve a structured pattern in DW-conv, whereas our method takes the GPU kernel into account and maintains a structured pattern while pruning DW-conv at a finer granularity, reducing accuracy loss. **Weakness 3)** Thank you for your valuable advice. Based on your feedback, we will make the following improvements: 1. We will rewrite the captions to provide a more detailed walkthrough, ensuring that readers can easily understand them. 2. To enhance reproducibility, we will add a pseudo-code section to the appendix and plan to release our implementation in the future. 3. 
We will include the ImageNet dataset information in the experiment section to clearly outline the experimental procedure: _[ImageNet experiment settings] learning rate: 0.001; fine-tuning epochs: 65; learning rate schedule: divided by 10 every 30 epochs; process: iterative pruning; weight decay: 1e-4; optimizer: SGD; momentum: 0.9; pre-trained model: PyTorch; all data are augmented with random crop and random horizontal flip._ **Weakness 6)** Thank you for pointing out our typo. Our proposed scheme is named DEPrune, and the mention of DSPrune is indeed a typo. We will thoroughly review our paper again and correct this and any other typographical errors in the revised version. We also think that so many names can be very difficult to understand. We will double-check Table 1 and the writing of the paper so that it provides a simple and clear concept of DEPrune. As you suggested, creating a richer table will likely make it easier for readers to understand the paper. **Question 1)** We believe there are various efficiency metrics, such as peak memory usage, computation throughput, and computation utilization, that can be measured in our experiments. Among these, we consider peak memory usage to be the most important efficiency metric; thus, we will include it in Table 4 in the revised paper. If the reviewer can suggest other efficiency metrics that should be included, we will measure them and incorporate the results in the revised version. **Question 2)** Applying DCP to general group convolution can be challenging. When the number of groups and channels differ, the rearrangement method for GPU processing varies, complicating the application of DCP. However, the enhanced method HSR, proposed in our paper, can be effectively applied in these scenarios. We believe that combining conventional structured pruning with our HSR could lead to improved inference time performance on general group convolution. 
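For readers reproducing the fine-tuning setup listed under Weakness 3, the learning rate schedule (0.001, divided by 10 every 30 epochs, over a 65-epoch run) is a standard step decay; a minimal sketch in Python (our rendering for illustration, not the authors' training code):

```python
def finetune_lr(epoch, base_lr=1e-3, step=30, factor=0.1):
    """Step-decay schedule: base LR divided by 10 every 30 epochs."""
    return base_lr * (factor ** (epoch // step))

# Epochs 0-29 train at 1e-3, epochs 30-59 at 1e-4, epochs 60-64 at 1e-5.
```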
**Question 3)** Thank you very much for your insightful feedback. The paper you mentioned, GKP [3], appears to be excellent work. GKP divides weights into groups and applies vector-wise pruning to each group, effectively addressing the issue described in Section 4.2. As you pointed out, when GKP is applied, it does not impact the subsequent layer, thereby avoiding the problem discussed in Section 4.2. We appreciate you bringing this important paper to our attention, and we will consider incorporating this research (GKP) with our DEPrune as potential future work. **Question 4)** According to Table 4 of paper [m-38], the extra overhead in total memory consumption due to zero-padding is approximately 0.3\%. To assess the impact of DEPrune-BH, we measured and presented the peak memory usage of MobileNet-v2 before and after applying DEPrune-BH with a 50\% pruning ratio, as shown in Author-Rebuttal-Table 1. Before applying DEPrune-BH, the peak memory usage is 7.22 MB, whereas after application, it decreases to 3.63 MB, representing a reduction of approximately 49.8\%. **Question 5)** Research on n:m sparsity is currently very active in the field of pruning. However, this sparsity approach has two major limitations: a lack of flexibility and the requirement for specialized hardware. First, it lacks flexibility because it is fixed at a 50\% pruning ratio, specifically 2:4 pruning [m-35]. As seen in Author-Rebuttal-Table 2, we conducted comparative experiments between NVIDIA's n:m sparsity and DEPrune on MobileNet-v2 using CIFAR-10. At the same pruning ratio of 50\%, DEPrune-B achieves 0.31\% higher accuracy than n:m sparsity. This is because DEPrune-B achieves a 50\% pruning ratio within $32 \times k \times k$ parameters, whereas n:m sparsity achieves a 50\% pruning ratio within a parameter size of 4, which is $8 \times k \times k$ times smaller than DEPrune-B, leading to an accuracy drop [m-21]. 
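For context, the 2:4 pattern referenced above keeps the 2 largest-magnitude weights out of every group of 4; a minimal NumPy sketch of magnitude-based 2:4 pruning (illustrative only, not the authors' GPU kernel or NVIDIA's implementation):

```python
import numpy as np

def prune_2_of_4(w):
    """For every group of 4 consecutive weights, zero the 2 smallest
    magnitudes (the 2:4 structured-sparsity pattern, i.e. 50% pruning)."""
    groups = np.asarray(w, dtype=float).reshape(-1, 4)
    drop = np.argsort(np.abs(groups), axis=1)[:, :2]   # 2 smallest |w| per group
    mask = np.ones_like(groups)
    np.put_along_axis(mask, drop, 0.0, axis=1)
    return (groups * mask).ravel()

w = np.array([0.9, -0.1, 0.05, -0.8, 0.3, 0.2, -0.6, 0.01])
pruned = prune_2_of_4(w)
# Exactly half of the weights survive: [0.9, 0, 0, -0.8, 0.3, 0, -0.6, 0]
```

Note the granularity contrast drawn in the rebuttal: 2:4 sparsity enforces 50\% sparsity inside every group of 4 weights, whereas DEPrune-B enforces its ratio over a much larger $32 \times k \times k$ block, leaving it more freedom to keep important weights.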
Secondly, in n:m sparsity, achieving optimal performance requires specialized hardware (NVIDIA A100) that can quickly handle index processing [m-35]. In contrast, our approach requires only a customized GPU kernel for processing. --- Rebuttal Comment 1.1: Title: Thank you for your rebuttal. Can we have the bs=1 latency report (W4)? Comment: I am satisfied with the authors' responses. However, it looks like W4 remains unaddressed. Is it possible to have a latency report with a batch size of 1? My original Q1 has a typo. It should be "Why are the efficiency metrics of some methods missing in Table 4?" I hope the authors can also follow up on this. Thanks. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for the additional feedback and for rephrasing the question about efficiency metrics. [m-\#] refers to the manuscript's references. **Weakness 4) [Is it possible to have a latency report with a batch size of 1?]** We apologize that we could not respond to your comments on W4 due to the word limit (restricted to 6,000 characters). The inference time using a batch size of 1 is provided in Rebuttal-Table 1. As shown in Rebuttal-Table 1, when the batch size is reduced to 1, DEPrune-BH demonstrates 1.90 and 1.57 times faster performance compared to the unpruned model and CafeNet-R [m-41], respectively. We observe that pruning's performance improvement is reduced when the batch size decreases from 32 to 1. The primary reason for this degradation is that GPUs rely heavily on a high level of thread-level parallelism (TLP). As the batch size increases, the number of scheduled threads (corresponding to the size of GEMM operations in NNs) also increases, allowing GPUs to achieve significant performance gains due to their ability to execute many threads simultaneously. 
Conversely, when the batch size is reduced, the degrees of TLP decrease (with the size of GEMM operations becoming smaller in NNs), leading to underutilization of GPU computation units and resulting in many execution units sitting idle for longer periods. Due to various characteristics of the GPU, reducing the batch size does not lead to a linear decrease in latency [1,2]. Due to the limited time window of the rebuttal, we were only able to measure the performance of EfficientNet-B0 at this time. In the future, we will share additional data for other models as well. Thank you for the helpful review. [1] "Accuracy-constrained efficiency optimization and GPU profiling of CNN inference for detecting drainage crossing locations." Proceedings of the SC'23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis. 2023. [2] "Variable batch size across layers for efficient prediction on CNNs." 2020 IEEE 13th International Conference on Cloud Computing (CLOUD). | **Method** | **DW-conv Pruning Ratio** | **PW-conv Pruning Ratio** | **Time (us)** | **Speed Up** | |:-----------------:|:-------------------------:|:-------------------------:|:-------------:|:------------:| | EfficientNet-B0 | - | - | 1276 | 1.00x | | CafeNet-R [m-41] | 30.2\% | 30.2\% | 1055 | 1.21x | | CafeNet-E [m-41] | 26.4\% | 26.4\% | 1072 | 1.19x | | DEPrune-BH | 84.7\% | 62.0\% | 672 | 1.90x | [Rebuttal-Table 1: Comparison of inference time of DEPrune-BH and other pruning methods on ImageNet. We set the batch size to 1.] **Question 1) [Why are the efficiency metrics of some methods missing in Table 4?]** Thank you for the clarification. The reason some metrics are not included in Table 4 is that the official experimental data were not disclosed in the corresponding pruning paper. To ensure a fair comparison, we only included the metrics provided in the official paper. 
Since some models only reported pruned FLOPs and did not provide pruned parameters, it is difficult to accurately measure the exact inference time from that information alone. For the methods that do provide pruned parameters, we measured only their inference time, carefully reviewing the available information.
Summary: The paper addresses an important topic of pruning depth-wise separable convolutions (DSConv) called DEPrune. While structural model pruning methods like channel pruning can achieve significant speed-up for regular convolutions, they cannot secure notable speed-up on DSConv layers as they mainly prune the point-wise convolution, which is a small fraction of the compute for DSConvs. Further, naively pruning the depth-wise channels significantly reduces its capacity, leading to severe performance degradation. DEPrune prunes the depth-wise convolutions in a fine-grained manner yet achieves structural sparsity to enable practical speed-up on GPUs. It also introduces two techniques BWT and HSR to further improve the performance. Strengths: 1. The paper is well-written. The motivations behind the design choices are clearly explained. 2. The problem of pruning DSConvs is an important topic as DSConvs are widely adopted for edge applications like MobileNets. Therefore, further increasing their efficiency is of significant interest to the community. 3. Extensive experimental results validate the design choices and show clear speed-up compared to the original models. Weaknesses: I cannot think of a major weakness in the paper. I hope that the authors provide more background for the readers not directly familiar with the concepts of "alignment in GPUs" or refer to proper references. Technical Quality: 4 Clarity: 3 Questions for Authors: I think the paper is interesting and can be useful in practical applications. Currently, as far as I know, DSConvs are not used in computationally intensive architectures like the U-Net models in diffusion models. Eventually, when more efficient architectures using DSConvs are developed, the proposed DEPrune can further improve the performance. I hope the authors release their implementations so that the community can benefit from them. 
Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The limitations are explained in the supplementary. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our gratitude to the reviewer for their constructive and insightful feedback. Below, we have provided our responses to the comments. If the paper is accepted, we will incorporate these helpful suggestions into the camera-ready version. **Weakness 1) [I hope that the authors provide more background for the readers not directly familiar with the concepts of "alignment in GPUs" or refer to proper references.]** We agree with the reviewer’s comment that the concepts of "alignment in GPUs" should be presented in our paper for better understanding. Therefore, if given the opportunity to revise our paper, we will include these details, referencing prior work [1, 2]. [1] Sparse GPU Kernels for Deep Learning, SC 2020 [2] NVIDIA. CUDA C++ Programming Guide (Design Guide). July 2020. **Question 1) [I hope the authors release their implementations so that the community can benefit from them.]** We are willing to provide our implementation once the paper is accepted to contribute to the community.
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewers' time and effort in providing constructive and valuable feedback on this manuscript. In response to the reviewers' questions, we have conducted evaluations: 1. Reviewer CSWm's Question 4: We provided peak memory usage data presented in Author-Rebuttal-Table 1. 2. Reviewer CSWm's Question 5: We provided n:m sparsity data presented in Author-Rebuttal-Table 2. 3. Reviewer bd55's Weakness 2: We provided ConvNeXt-Base data presented in Author-Rebuttal-Table 3. 4. Reviewer VAUi's Weakness 1: We provided pre-processing time results in Author-Rebuttal-Table 4. 5. Reviewer VAUi's Weakness 3: We have provided edge GPU experiment results in Author-Rebuttal-Table 5. Pdf: /pdf/65f0a2132821cb3d2ec5468c4224fa1afb97c0eb.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
TopoLogic: An Interpretable Pipeline for Lane Topology Reasoning on Driving Scenes
Accept (poster)
Summary: The paper investigates topological reasoning for lane graph prediction for autonomous driving. The paper focuses on improving the prediction of the topological structure of the lane graph. Therefore, they propose two new mechanisms: the first is a geometric approach that estimates connectivity based on the distance between the end point and starting point of center lines. Second, they propose a semantic approach which uses the similarities of lane queries in a high-dimensional space. These two approaches stand in contrast to existing works which mainly use MLPs to predict the connectivity of the lanes. They show how these approaches can be interleaved into a lane graph network and how this improves results, especially the topological reasoning results. Even adding only the geometric approach to existing methods improves the topological reasoning results. Strengths: The paper is well written and easy to follow. The paper and results highlight the importance of geometric cues when predicting connectivity. Or in other words, the fact that the geometric approach works so well shows that there is a lot of untapped potential in current methods. Furthermore, the geometric approach can be integrated into existing networks or combined with other approaches and allows for competitive lane topology prediction. Both the geometric and semantic prediction branches are clear and make sense and are able to improve results for topological metrics. Weaknesses: - It is not clear to me why the paper does not compare with TopoMLP; the numbers for Table 2 v1.0.0 can just be taken from that paper. - When comparing TOP_lt, TopoMLP outperforms TopoLogic and overall performance is very similar; in this case both methods use the same backbone. - The paper highlights that geometric cues in predicting the topology of the lane graph are underutilized, which is interesting, but I am not fully sure if this is a NeurIPS paper, since technical contributions are marginal. 
- See questions Technical Quality: 3 Clarity: 3 Questions for Authors: - Given that f_ours is just a Gaussian function with an adapted decision boundary, I think the fair comparison would be with a hand-tuned decision boundary, for example by looking at end-point errors of a trained model. - f_tanh in equation 6 seems to map to [0, 2] and not to [0, 1] - Equation 10 is not a [0, 1] probability anymore; to retain this property you would need to take a convex combination of the two values. Do you perform some sort of normalization? - If you say a variable is 1x1 do you mean it is a real number? If so, just write \in R - The batch size of 2 on 8 GPUs seems strange, is it a batch size of 2 per GPU, so an actual batch size of 16? - How do you handle traffic elements? - Typos: - L90: . some methods - not capitalized after full stop - L204: Both models - three models are listed before - L236: ),especially - L259: Table 4 Indicate show Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Has been discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thanks for your careful reading of our paper. We hope our response and clarification can ease some of your concerns and that you could reconsider your rating.** **Q1:** Why does the paper not compare with TopoMLP? **A1:** Thank you for your insightful question. We will respond to it in detail. Although both TopoMLP and TopoLogic use ResNet-50 as their backbone, TopoMLP is implemented using PETR, while TopoLogic, TopoNet, SMERF, and LaneSegNet are based on DETR. **Unfortunately, TopoMLP could not be run on the NVIDIA RTX 3090 GPU due to its high memory consumption. As a result, we were unable to perform a comparative analysis with TopoMLP at that time due to equipment limitations.** We apologize for this inconvenience. To address this issue, **we have implemented TopoLogic based on PETR and conducted experiments on a Tesla A100 GPU**. The experimental results are provided below, and we will include these results in the final version of the paper. | Method | DET$_l$ | DET$_t$ | TOP$_{ll}$ | TOP$_{lt}$ | OLS | | ---------- | -------- | -------- | -------- | -------- | -------- | | TopoMLP | 28.3 | **50.0** | 19.0 | 23.4 | 42.2 | | TopoLogic | **29.9** | 47.2 | 23.9 | 25.4 | 44.1 | | TopoLogic\* | 29.7 | 49.5 | **25.4** | **26.8** | **45.3** | The experiments were conducted on subset_A, OpenLane-V2 version 2.1.0. In this context, **TopoLogic\* denotes the version of TopoLogic implemented using PETR.** ---- **Q2:** TopoMLP outperforms TopoLogic on TOP$_{lt}$? **A2:** As indicated in the table in A1, TopoMLP does not outperform TopoLogic on TOP$\_{lt}$ (25.4 vs. **26.8**). We implemented TopoLogic using PETR, and it shows improved performance on TOP$\_{lt}$ (**26.8** vs. 25.4 vs. 23.4). ---- **Q3:** Not fully sure if this is a NeurIPS paper, since technical contributions are marginal. **A3:** Thanks. 
The performance can be **significantly improved** (**+11.4** TOP$\_{ll}$ in Table 5) even if it is only used as post-processing, and can be **further improved** (**+13.0** TOP$\_{ll}$ in Table 1) after integrated training. Such an improvement exceeds that of previous works by a large margin, which **in itself** is a valuable contribution to the field. **Reasonable analysis** and **effective verification** are our academic contributions, which will help promote the development of the field. ---- **Q4:** Fair comparison about mapping function. **A4:** Thank you for your insightful feedback. We appreciate your suggestion to ensure a fair comparison by considering a hand-tuned decision boundary. In response to your concern, we conducted additional experiments to further validate the performance of the $f_{ours}$ Gaussian function with its adaptive decision boundary. To determine the decision boundary, we explored two hyperparameters: $\alpha$ and $\lambda$. We evaluated the TOP$\_{ll}$ metric across various combinations of these hyperparameters. The results are presented in the following table: | $\alpha$ | $\lambda$ | TOP$\_{ll}$ | | $\alpha$ | $\lambda$ | TOP$\_{ll}$ | | ---------- | --------- | ---------- | ---- | -------- | ----------- | ---------- | | 2.0 | 0.23 | 20.4 | | 1.3 | 0.06 | 16.0 | | 1.8 | 0.23 | 21.0 | | 1.3 | 0.10 | 19.7 | | 1.6 | 0.23 | 21.8 | | 1.3 | 0.14 | 21.6 | | 1.4 | 0.23 | 22.5 | | 1.3 | 0.18 | 22.7 | | **1.3** | **0.23** | **23.9** | | **1.3** | **0.23** | **23.9** | | 1.2 | 0.23 | 22.8 | | 1.3 | 0.26 | 23.1 | As the table illustrates, **the decision boundary with $\alpha=1.3$ and $\lambda=0.23$ yielded the best performance in terms of the TOP$\_{ll}$ metric.** This finding aligns with our original experimental results and underscores the optimization effectiveness and advantages of the $f_{ours}$ Gaussian function. 
We hope these additional results address your concerns regarding fair comparison and demonstrate the robustness of our approach. ---- **Q5:** $f_{tanh}$ in equation 6 seems to map to [0, 2] and not to [0, 1]. **A5:** Thank you for your detailed consideration. In fact, since $f$ maps endpoint distances, $x$ is always greater than or equal to 0 rather than unbounded, ensuring that the range of these functions is always within [0, 1]. ---- **Q6:** Equation 10 is not a [0, 1] probability, and do you perform some sort of normalization? **A6:** Thank you for your attention. We did not perform normalization. The two coefficients, λ₁ and λ₂, in Equation 10 are learnable and can be trained to lie within [0, 1]. In other words, λ₁ + λ₂ approaches 1. ---- **Q7:** If you say a variable is 1x1 do you mean it is a real number? If so just write \in R. **A7:** Yes, it is not necessary to give a 1x1 shape for scalars, and we will modify the expression. ---- **Q8:** The batch size of 2 on 8 GPUs seems strange, is it a batch size of 2 per GPU, so an actual batch size of 16? **A8:** Your understanding is correct. To be precise, the batch size on each GPU is 2, so the total batch size across 8 GPUs is 16. We apologize for the lack of clarity. ---- **Q9:** How do you handle traffic elements? **A9:** For the traffic element treatment, we adopt the same strategy as the baseline method TopoNet, i.e., predict 2D bounding boxes of traffic elements using DETR, and reason the topology between them and lanes utilizing a vanilla MLP. Since our work focuses on lane topology, we rarely mentioned traffic elements; this will be added in the final version. ---- **Q10:** Typos. **A10:** Thank you for carefully pointing out these typos. We will diligently correct these issues. **We hope this response could help address your concerns, and we look forward to your further feedback.**
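The range argument in A5 can be checked numerically. Since the exact form of Equation 6 is not reproduced in this thread, the mapping below, $f_{tanh}(x) = 1 - \tanh(x)$, is an assumed, hypothetical form chosen only because it reconciles both observations: its range over all reals is (0, 2), matching the reviewer's point, while over non-negative distances $x \geq 0$ it stays within (0, 1]:

```python
import math

def f_tanh(x):
    # Hypothetical distance-to-probability mapping: f(x) = 1 - tanh(x).
    # Over all reals the range is (0, 2); for distances x >= 0 it is (0, 1].
    return 1.0 - math.tanh(x)

assert f_tanh(0.0) == 1.0          # identical endpoints -> probability 1
assert 0.0 < f_tanh(5.0) < 1.0     # distant endpoints -> probability near 0
assert f_tanh(-5.0) > 1.0          # only a negative "distance" would exceed 1
```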
Summary: Reasoning about lane topology is becoming more and more important for the autonomous driving community. Current methods mostly focus on improving the perception performance and ignore the reasoning part of the task. The authors raise the importance of the relationship between lanes, namely the lane distance and lane similarity. The lane distance is determined by the geometric distance between the starting and ending points of the lanes. Then, the connectivity is decided based on the result of a learnable threshold function. The lane similarity is calculated based on the lane queries, and a large similarity indicates a large possibility of connectivity. Both lane distance and lane similarity modules serve as plug-in modules to existing networks. Experimentally, both modules have significantly improved the existing methods. Strengths: - The authors observed that current methods on lane topology reasoning mostly focus on improving perception accuracy but ignore topology reasoning. The proposed modules serve as a good post-processing technique for topology reasoning. - The proposed modules are effective and can be placed in existing networks. Weaknesses: - In the experiment section, the authors provide scores on lane segment prediction. However, the lane segment is different from the lane centerline. Only the lane centerline is discussed in the methodology section. - It is not stated which split of the dataset is used for experiments. - The mixture of using different versions of the metric is confusing (vx.0 and vx.1). As stated in the repository of the dataset, this difference leads to different TOP scores. For instance, in Table, metrics in v2.0 and v2.1 are both listed, which is unreasonable. - The two proposed modules focus on topology reasoning. However, experimentally, the detection scores of the proposed pipeline also improved compared to other methods. This improvement is not discussed. 
Technical Quality: 2 Clarity: 2 Questions for Authors: - The two proposed modules are plug-in modules on current networks, and the proposed pipeline is trained in an end-to-end manner. However, as these two modules focus on topology reasoning, and the final topology prediction is decided by these two modules, it is curious to know how these two modules benefit existing networks. Namely, only parameters in the proposed modules are trained, and the remaining weights of the existing networks are frozen. - For the second proposed module, the lane similarity module. Logically, the lane queries usually encode positional information of lanes. It is curious to know why improving the similarity between two connected lanes would improve topology reasoning performance. That is, the only similar thing between two connected lanes is the ending point of one lane and the starting point of another. Two parallel lanes should have a larger similarity than two connected lanes. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thanks for your careful reading of our paper. We hope our response and clarification can ease some of your concerns and that you could reconsider your rating.** **Q1:** The lane segment is not discussed in the methodology section. **A1:** Thanks for your attention to this detail, which is indeed omitted from the Methods section. However, a lane segment is just **another, more refined representation of the lane**, and the centerline can still be **easily extracted** from the lane segment, so our algorithm does not need any special processing for lane segments. ---- **Q2:** It is not stated which split of the dataset is used for experiments. **A2:** Thanks for pointing this out. Perhaps we should add this information to the table captions, although it can be found on lines 233-234 and 183-185 (SOTA comparison, Table 1 on subset_A/B & Table 2 on subset_A) and lines 248-250 (ablation study, Tables 3-5 on subset_A). ---- **Q3:** The mixture of using different versions of the metric is confusing (vx.0 and vx.1). **A3:** It is indeed unusual to use two versions of the metric, but this is a legacy issue in the lane topology field, as can be seen in the OpenLane-V2 official repository ([Update on OpenLane-V2 Metric · Issue #76 · OpenDriveLab/OpenLane-V2 · GitHub](https://github.com/OpenDriveLab/OpenLane-V2/issues/76)). We provide two versions of the metric **for comparison with more methods**. Moreover, it should be noted that our method has a **significant advantage** over the others **regardless of which version of the metric is used**. ---- **Q4:** Why are the detection scores of the proposed pipeline also improved compared to other methods? **A4:** The reason for this can be found in the caption of Figure 2 and the Introduction (lines 61-62), and we will add further explanation in the experiment section. 
Note that the proposed modules serve in **each** layer of the decoder, where geometric distance topology and lane similarity topology are fused into the final lane topology. This lane topology is then used to augment lane learning **by a GNN aggregating features from adjacent lanes** in the **next** decoder layer. Thus, the improvement on TOP$\_{ll}$ corresponds to an increased DET$_l$ (**28.6** to **29.9** (**+1.3**) in Table 1). ---- **Q5:** How do these two modules benefit existing networks? **A5:** Thank you for your valuable suggestions. We carried out experiments by freezing the network parameters of TopoNet and incorporating the similarity and geometric distance modules for end-to-end training. The results in the table indicate that the two modules introduced in TopoLogic significantly enhance TopoNet. Additionally, training TopoLogic end-to-end yields better performance compared to freezing a portion of the parameters.

| Method | DET$_l$ | DET$_t$ | TOP$\_{ll}$ | TOP$_{lt}$ | OLS |
| --- | --- | --- | --- | --- | --- |
| TopoNet (baseline) | 28.6 | **48.6** | 10.9 | 23.8 | 39.8 |
| TopoLogic (freeze TopoNet parameters) | 29.2 | 46.9 | 22.8 | 24.8 | 43.4 |
| TopoLogic | **29.9** | 47.2 | **23.9** | **25.4** | **44.1** |

---- **Q6:** Why would improving the similarity between two connected lanes improve topology reasoning performance? **A6:** Thank you for your valuable question; it is very meaningful. The similarity module proposed in TopoLogic calculates the similarity of **lane endpoints** rather than **the entire lane itself**. As shown in Figure 1, the lane query passes through **two independent MLPs** to encode representations, computing the similarity between **the starting and ending points** of two directed lanes. Additionally, to address your inquiry, we conducted experiments with MLPs as shown in the table.
**The "No MLP" configuration does not encode the lane query, calculating the similarity of the entire lane. In contrast, the "Single MLP" configuration computes the similarity of the same endpoint across two lanes.** The table indicates that employing two independent MLPs to encode lane queries for calculating the similarity between the starting and ending points of directed lanes yields the best performance, thereby validating the effectiveness of the TopoLogic design.

| Method | DET$_l$ | DET$_t$ | TOP$\_{ll}$ | TOP$_{lt}$ | OLS |
| --- | --- | --- | --- | --- | --- |
| No MLP | 25.6 | 46.5 | 18.7 | 20.8 | 40.2 |
| Single MLP | 27.5 | 46.8 | 21.2 | 23.8 | 42.3 |
| Two independent MLPs | **29.9** | **47.2** | **23.9** | **25.4** | **44.1** |

**We hope this response could help address your concerns, and we wish to receive your further feedback soon.** --- Rebuttal Comment 1.1: Comment: Thank you for your feedback. I still have some questions. For Q2, are the scores reported on the validation set or test set? For Q5, the DET$_t$ of the frozen TopoNet drops compared to the baseline version; are there modules which could affect the traffic element detection branch? --- Rebuttal 2: Comment: **Thank you very much for your quick response.** **Q7:** For Q2, are the scores reported on the validation set or test set? **A7:** We're sorry we didn't mention that detail, and we will add more clarification in the final version. In fact, all the results are reported on the **validation set**, which is **completely consistent** with the protocol of **all the reference papers**, e.g., TopoNet, SMERF, LaneSegNet, etc. As for the test set, it is only used for the closed evaluation of the OpenLane-V2 challenge (https://opendrivelab.com/challenge2024/#mapless_driving), which was not available at the time of our research.
---- **Q8:** For Q5, the DET$_t$ of the frozen TopoNet drops compared to the baseline version; are there modules which could affect the traffic element detection branch? **A8:** We're sorry for the confusion. Both the similarity module and the geometric distance module involve only the **lanes**, not the **traffic elements**, so DET$_t$ really should not drop. However, in the freezing experiment we loaded the weights of the TopoNet that we reproduced, whose DET$_t$ is itself lower (**46.9** vs. **48.6**) than reported in the TopoNet paper. It seems to be **a bug in the baseline's official code**: the reproduced DET$_t$ was always **unstable and lower** than the value reported in the original paper, and this phenomenon can also be observed in the results of other papers (e.g., the **44.5** DET$_t$ of TopoNet in Table 1 of the SMERF paper$^{[1]}$). [1] Luo K Z, Weng X, Wang Y, et al. Augmenting lane perception and topology understanding with standard definition navigation maps. arXiv preprint arXiv:2311.04079, 2023. **If you have any further questions or suggestions about our article, we'd love to discuss our content with you.**
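The per-layer fusion of geometric-distance topology and similarity topology described in this thread (A4) can be sketched in a few lines. The exact form of the paper's merge (its Equation (10), with scalar weights $\lambda_1$, $\lambda_2$) is not quoted in this thread, so the simple weighted combination below is an assumption, not the authors' exact formula:

```python
import numpy as np

def fuse_topology(G_geo, G_sim, lam1=0.5, lam2=0.5):
    """Hypothetical fusion of the geometric-distance topology matrix and
    the similarity topology matrix into a final lane-lane topology.
    lam1/lam2 stand in for the two learnable scalars of Equation (10);
    the weighted sum itself is an assumed form."""
    return np.clip(lam1 * G_geo + lam2 * G_sim, 0.0, 1.0)

# Toy 2-lane example: entries near 1 mean "lane i connects to lane j".
G_geo = np.array([[1.0, 0.8], [0.1, 1.0]])  # from endpoint distances
G_sim = np.array([[1.0, 0.6], [0.3, 1.0]])  # from lane-query similarity
G = fuse_topology(G_geo, G_sim)
```

In the described pipeline, such a fused matrix would feed the GNN aggregation that augments lane learning in the next decoder layer.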
Summary: The authors claim previous topology reasoning methods typically boost reasoning performance by enhancing the perception of lanes and directly adopt an MLP to learn lane topology from lane queries. In this work, the authors propose to make full use of lane geometric distance and lane query similarity for topology reasoning. The proposed method, named TopoLogic, mitigates the impact of endpoint shifts in geometric space, and introduces explicit similarity calculation in semantic space as a complement. The method achieves SOTA results on the OpenLane-V2 dataset. Strengths: 1. The idea of paying more attention to geometry and similarity to help topology reasoning makes sense. It makes the reasoning process more interpretable and robust. 2. The performance gain brought by the proposed method is significant. The authors provide adequate ablation experiments to validate the design. 3. Many qualitative results are presented to show the effectiveness of the proposed method. The connection between lane endpoints is obviously improved. Weaknesses: 1. Reads like an incremental work based on TopoNet; the contribution is limited. The contributed part is only a similarity module for mapping lane line endpoints, and the design is simple and straightforward, not surprising enough. 2. The proposed GeoDist module is a post-processing algorithm. It's more like an engineering improvement than an academic contribution. Technical Quality: 3 Clarity: 3 Questions for Authors: When compared with previous methods (like TopoNet, SMERF), what about the computation overhead and the difference in network architecture? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Refer to the Weaknesses part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thanks for your careful reading of our paper. We hope our response and clarification can ease some of your concerns and that you could reconsider your rating.** **Q1:** The work reads like an incremental work based on TopoNet; the contribution is limited. The contributed part is only a similarity module for mapping lane line endpoints, and the design is simple and straightforward, not surprising enough. **A1:** Thanks for your comments. Compared to TopoNet, our modification looks small, but that does not mean the contribution is small. - We clearly point out that TopoNet and other methods ignore the endpoint-connection characteristics between lanes during topological reasoning, and we design **a module based on geometric distance** and **a module based on semantic similarity** to **significantly improve** topological performance (**+13.0** TOP$\_{ll}$ in Table 1) while **reducing the number of parameters and the amount of computation**. Therefore, the contributed part is not limited to the similarity module. - As for the similarity module, its subtlety lies in the fact that it uses **two independent MLPs** to map the lane query **rather than a single MLP**, which can **decouple a lane into two queries for the start and end point**, achieving an effect **analogous** to the (endpoint) geometric distance module **in semantic space**. This opinion can be verified by the following experiments, so it is not simple and straightforward.

| | DET$_l$ | DET$_t$ | TOP$\_{ll}$ | TOP$_{lt}$ | OLS |
| --- | --- | --- | --- | --- | --- |
| No MLP | 25.6 | 45.9 | 18.7 | 20.8 | 40.0 |
| Single MLP | 27.5 | 46.8 | 21.2 | 23.8 | 42.3 |
| Two independent MLPs | **29.9** | **47.2** | **23.9** | **25.4** | **44.1** |

---- **Q2:** The proposed GeoDist module is a post-processing algorithm. It's like an engineering improvement rather than an academic contribution. **A2:** Thanks for your comments, but we don't think it's just an engineering improvement.
- The GeoDist module is not just used as post-processing after training. This module can be used (in conjunction with the Similarity module) at each layer of the decoder, so that the lane topology is enhanced geometrically and semantically at each layer and refined layer by layer. In addition, the lane topology in one layer is also used, via a GNN aggregating features from adjacent lanes, to augment lane learning in the next layer. Using the GeoDist module in an integrated training manner provides larger performance gains (**+13.0** TOP$\_{ll}$ in Table 1) than just post-processing (**+11.4** TOP$\_{ll}$ in Table 5). - This improvement is not an accidental attempt, but comes from our **accurate and highly interpretable understanding** of **the endpoint-connection characteristics** of the lane topology, which sets it apart from simple engineering improvements. All in all, **reasonable analysis** and **effective verification** are our academic contributions. ---- **Q3:** When compared with previous methods (like TopoNet, SMERF), what about the computation overhead and the difference in network architecture? **A3:** Thank you for your valuable question. The key distinction between TopoLogic and both TopoNet and SMERF is that **TopoLogic introduces a similarity module and a geometric distance module for lane topology reasoning, whereas TopoNet and SMERF only use an MLP for lane topology reasoning.** Here, we provide some computational benchmarks as follows, and we will include the experiment results in the final version of the paper.

| | SDMap | #PARAMS | FLOPS | FPS |
| --- | --- | --- | --- | --- |
| TopoNet | false | 38.6M | 712.1G | 10.5 |
| TopoLogic (TopoNet ver.) | false | 37.8M | 665.0G | 10.8 |
| SMERF | true | 39.4M | 720.1G | 15.3 |
| TopoLogic (SMERF ver.) | true | 38.7M | 678.2G | 15.8 |

**We hope this response could help address your concerns, and we wish to receive your further feedback soon.** --- Rebuttal Comment 1.1: Title: Further questions Comment: Thanks for the authors' feedback. My further questions are as follows: Why do TopoNet-based TopoLogic and SMERF-based TopoLogic run faster and have fewer params and FLOPs compared with the original TopoNet and SMERF? Is the post-processing step (GeoDist module) included? --- Reply to Comment 1.1.1: Comment: **Thank you for your insightful question.** **Q4**: Why do TopoNet-based TopoLogic and SMERF-based TopoLogic run faster and have fewer params and FLOPs compared with the original TopoNet and SMERF? **A4**: TopoNet and SMERF both compute lane topology directly using an MLP. In contrast, TopoLogic calculates lane topology through a geometric distance module and a similarity module. This distinction highlights the structural differences between TopoLogic and both TopoNet and SMERF. In TopoNet and SMERF, two MLPs are used to generate $Q_{emb} \in \mathbb{R}^{N \times C}$, where $N$ is the number of queries. Subsequently, $Q_{emb}$ is repeated to $Q_{emb}^{'} \in \mathbb{R}^{N \times N \times C}$. **The results are then concatenated and processed through $\operatorname{MLP_3}$ to compute the lane topology**, as described by the following equations: $$ Q_{emb_1}, Q_{emb_2}=\operatorname{MLP_1}(Q_l^i), \operatorname{MLP_2}(Q_l^i) \in \mathbb{R}^{N \times C} $$ $$ Q_{emb_1}^{'},Q_{emb_2}^{'}=\operatorname{Repeat}(Q_{emb_1}),\operatorname{Repeat}(Q_{emb_2}) \in \mathbb{R}^{N \times N \times C} $$ $$ \operatorname{G_{sim}}=\operatorname{MLP_3}(\operatorname{Concat}(Q_{emb_1}^{'}, Q_{emb_2}^{'})) \in \mathbb{R}^{N \times N} $$ For TopoLogic, we calculate lane topology through a geometric distance module and a similarity module.
In the similarity module, we directly compute the lane topology via matrix multiplication between $Q_{emb_1}$ and the transpose of $Q_{emb_2}$, followed by applying a sigmoid activation function, as detailed in our paper: $$ Q_{emb_1}, Q_{emb_2}=\operatorname{MLP_1}(Q_l^i), \operatorname{MLP_2}(Q_l^i) \in \mathbb{R}^{N \times C} $$ $$ \operatorname{S}=\operatorname{matmul}(Q_{emb_1},\operatorname{transpose}(Q_{emb_2})) \in \mathbb{R}^{N \times N} $$ $$ \operatorname{G_{sim}}=\operatorname{sigmoid}(\operatorname{S}) \in \mathbb{R}^{N \times N} $$ In the geometric distance module, as illustrated in Equation (4) of our paper, we utilize only two parameters, $\alpha$ and $\lambda$. Additionally, the merge process involves two parameters, $\lambda_1$ and $\lambda_2$, as shown in Equation (10) of our paper. All these parameters are scalar values, i.e., $\alpha,\lambda, \lambda_1, \lambda_2\in\mathbb{R}$. To summarize, while the TopoNet and SMERF models involve **three MLPs** with associated parameters, TopoLogic utilizes only **two MLPs** in the similarity module, two parameters $\alpha$ and $\lambda$ in the geometric distance module, and $\lambda_1$ and $\lambda_2$ in the merge process. **In TopoLogic, the scalar parameters in the geometric distance module are significantly fewer in number compared to the MLP$_3$ parameters in TopoNet and SMERF.** Additionally, MLP$_3$ results in a higher computational complexity. Therefore, TopoLogic has a smaller number of parameters (#PARAMS) and lower computational complexity (FLOPS), leading to faster speeds (FPS) compared to TopoNet and SMERF. **We hope this explanation addresses your concern; thank you again for your valuable feedback.**
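The similarity-module equations in this thread translate almost line-for-line into code. A minimal NumPy sketch follows (the two MLPs are reduced to single linear maps and the sizes are toy values, both assumptions made for brevity); it makes concrete why no MLP$_3$ over the $N \times N$ concatenated pairs is needed:

```python
import numpy as np

rng = np.random.default_rng(0)
N, C = 4, 8  # toy number of lane queries and channels

# Two *independent* encoders (stand-ins for MLP_1 and MLP_2) decouple
# each lane query into a start-point and an end-point representation.
W1 = rng.standard_normal((C, C))
W2 = rng.standard_normal((C, C))
Q_l = rng.standard_normal((N, C))   # lane queries from the decoder

Q_emb1 = Q_l @ W1                   # (N, C), start-point embedding
Q_emb2 = Q_l @ W2                   # (N, C), end-point embedding

# Similarity topology: one matmul plus a sigmoid, rather than running
# an MLP over every concatenated pair of queries.
S = Q_emb1 @ Q_emb2.T               # (N, N) pairwise scores
G_sim = 1.0 / (1.0 + np.exp(-S))    # sigmoid, entries in [0, 1]
```

Note that `G_sim` is generally asymmetric, which matches directed lane connectivity: the end of lane $i$ matching the start of lane $j$ does not imply the reverse.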
Summary: This paper proposes an interpretable method for lane topology reasoning based on lane geometric distance and lane query similarity. The authors reveal that the lane topology is easily disturbed by the endpoint shifts. Based on this, the proposed post-processing module improves the robustness and performance of lane topology reasoning, which can be plugged into other methods without re-training. And it also alleviates the influence of inaccuracies in lane detection on topology reasoning. Extensive experiments on OpenLane-V2 demonstrate its state-of-the-art performance. Strengths: - The authors reveal the phenomenon that topology reasoning is highly susceptible to lane line endpoint shifts, providing a solid foundation and motivation for subsequent research. - The proposed post-processing module improves the robustness and performance of lane topology reasoning, which can be plugged into other methods without retraining. - The entire paper is easy to understand. The lane graph in Figure 5 shows very clearly the topological relationships between lanes and the differences from previous methods, which should be followed by other subsequent work. Weaknesses: - The proposed method is relatively simple and involves only post-processing modules, which at the same time leads to limited contributions and incremental improvements. - Considering that TopoNet is just a simple baseline model, the boost in topological reasoning is not surprising. Moreover, as can be seen in Table 1, the improvement of $TOP_{lt}$ is very small and $DET_{t}$ even decreases, which needs further analysis and discussion. Technical Quality: 3 Clarity: 3 Questions for Authors: - As can be seen in Figure 3, compared to other mapping functions, the final result of the learnable Gaussian mapping function has a much higher threshold. 
Therefore, compared to the present comparison experiments ($f_{gau}, f_{sig}, f_{tan}$), it may be better to manually set the hyperparameters for a series of gradients to observe its effect on the final performance. And I'm wondering why $f_{gau}, f_{sig}, f_{tan}$ are very close but differ dramatically on $TOP_{ll}$ in Table 3. - According to L146-149, the motivation of Lane Similarity Topology is to alleviate the influence of inaccuracies in lane detection on topology reasoning. However, the proposed method as a post-processing step does not significantly change the lane line detection results. In addition, the comparison results of the Lane Similarity Topology module should be added to Figure 4 to increase its persuasiveness. There are many typos in the paper, which should be further checked and polished. For example: L6: "are prone to" -> "is prone to" L12: "our methods provides" -> "our methods provide" L17: "boost" -> "boosting" Figure 1 caption: "reasoing" Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: This paper is limited in the post-processing for topology reasoning. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thanks for your careful reading of our paper. We hope our response and clarification can ease some of your concerns.** **Q1:** The proposed method is relatively simple and involves only post-processing modules, which at the same time leads to limited contributions and incremental improvements. **A1:** Thanks for your comments. - The proposed method, TopoLogic, is an end-to-end trained model with two key modules (i.e., Similarity & GeoDist) rather than just a simple post-processing step. **The modules are plugged into each layer of the decoder, so that the lane topology is enhanced geometrically and semantically at each layer and refined layer by layer.** Once trained, the GeoDist module can be utilized as a plug-and-play post-processing component for any pre-trained lane topology reasoning model to enhance TOP$\_{ll}$. - Although the model is relatively simple, the performance improvement after end-to-end training is significant (**10.9** to **23.9** (**+13.0**) on TOP$\_{ll}$ in Table 1). **The improvement in itself** is a valuable contribution to the field, and its **theoretical basis** (the endpoint-connection characteristics of the lane topology) is another contribution to the field. ---- **Q2.1:** Considering that TopoNet is just a simple baseline model, the boost in topological reasoning is not surprising. **A2.1:** Thank you for your feedback. We are to blame for the ambiguity, and we will add more clarification in the final version. **TopoNet is the officially recommended baseline evaluated on the OpenLane-V2 benchmark, and recently many new models (e.g., SMERF, LaneSegNet) are actually based on TopoNet, which is why we stated, somewhat vaguely, that our TopoLogic is based on TopoNet.** We conducted experiments with TopoNet, SMERF, and LaneSegNet. As demonstrated in Table 1 and Table 2, TopoLogic consistently achieves superior performance compared to these approaches.
**Q2.2:** The improvement of TOP$_{lt}$ is very small and DET$_t$ even decreases, which needs further analysis and discussion. **A2.2:** Thanks for your comments, and we will add more clarification in the final version. - It is **reasonable** that the improvement on TOP$_{lt}$ is smaller than the improvement on TOP$\_{ll}$, as our method **mainly affects the lane-lane topology** and **does not deal with traffic elements specifically**. - For the same reason, DET$_t$ has not improved. As for the occasional decrease, it seems to be **a bug in the baseline's official code**. When we reproduced the DET$_t$ of the baseline, it was always **unstable and lower** than the value reported in the original paper. We also found that the reproduced DET$_t$ results in other papers (e.g., SMERF) were lower. ---- **Q3.1:** It may be better to manually set the hyperparameters for a series of gradients to observe the effect on the final performance. **A3.1:** Thank you for your feedback. We tested a series of manually set $\alpha$ and $\lambda$ values to observe their effects on performance. As shown in the following table, the automatically learned hyperparameters $\alpha=1.3, \lambda=0.23$ are actually superior to the other manually set values, which frees us from the hassle of tuning parameters.

| $\alpha$ | $\lambda$ | TOP$\_{ll}$ | | $\alpha$ | $\lambda$ | TOP$\_{ll}$ |
| --- | --- | --- | --- | --- | --- | --- |
| 2.0 | 0.23 | 20.4 | | 1.3 | 0.06 | 16.0 |
| 1.8 | 0.23 | 21.0 | | 1.3 | 0.10 | 19.7 |
| 1.6 | 0.23 | 21.8 | | 1.3 | 0.14 | 21.6 |
| 1.4 | 0.23 | 22.5 | | 1.3 | 0.18 | 22.7 |
| **1.3** | **0.23** | **23.9** | | **1.3** | **0.23** | **23.9** |
| 1.2 | 0.23 | 22.8 | | 1.3 | 0.26 | 23.1 |

---- **Q3.2:** Why are $f_{gau}$, $f_{sig}$, $f_{tan}$ very close but differ dramatically on TOP$_{ll}$ in Table 3? **A3.2:** Thank you for your thoughtful question. We believe this counterintuitive phenomenon is primarily due to **the sensitivity of these functions to the threshold values**.
To investigate this hypothesis, we conducted additional analyses by plotting Gaussian function curves with varying $\alpha$ and $\lambda$ parameters, as illustrated in Figures 1 and 2 of the Rebuttal PDF. **These figures and the table in A3.1 show that while the threshold values do not exhibit significant changes, the TOP$\_{ll}$ scores vary substantially**, which is another reason why we adopt learnable mapping functions in TopoLogic. ---- **Q4.1:** The motivation of Lane Similarity Topology is to alleviate the influence of inaccuracies in lane detection on topology reasoning. However, the proposed method as a post-processing step does not significantly change the lane line detection results. **A4.1:** Nice question. We are to blame for the ambiguity, and we will add more clarification in the final version. Our motivation is indeed to alleviate the influence of inaccurate lane detection on topological reasoning; however, our strategy is **not to directly improve lane detection**, but to **robustly improve lane topological reasoning** through the geometric and semantic information of the lane **even when detection is inaccurate**, which is a brand new perspective first proposed by us for the topological reasoning task. ---- **Q4.2:** In addition, the comparison results of the Lane Similarity Topology module should be added to Figure 4 to increase its persuasiveness. **A4.2:** Thanks for your valuable suggestions; we will add the module's visualized results to Figure 4. Since OpenReview replies cannot directly include pictures, please see Figure 3 of the Rebuttal PDF. ---- **Q5:** Typos. **A5:** Thank you for carefully pointing out these typos. We will diligently correct these issues. **We hope this response could help address your concerns, and we wish to receive your further feedback soon.** --- Rebuttal Comment 1.1: Comment: Thanks for the authors' positive feedback. It seems that the results of $TOP_{ll}$ are very sensitive to the hyperparameters.
And as shown in Figure 3 of the global rebuttal, the introduction of the similarity module removes many predictions of topology, yet both results in (b) and (c) are far away from the GT. I think that the existing metrics may not reflect the results well, and the filtering could lead to large variations. In addition, the authors are not able to clearly explain the $DET$ results. The above makes the quantitative experiments less convincing. Overall, I still have concerns about the incremental improvements and contributions, which are also mentioned by other reviewers. --- Reply to Comment 1.1.1: Comment: **We are very grateful that you have carefully reviewed our rebuttal response.** Although there are deficiencies in the details, we still believe that this work makes a sufficient contribution to the field: **we have significantly improved the core metrics (TOP$_{ll}$ & OLS) of the emerging task of topology reasoning in autonomous driving with an easily understandable method.** Regarding your further questions, we answer them one by one below. We hope you can reconsider the rating of our article. ---- **Q6.1:** It seems that the results of TOP$_{ll}$ are very sensitive to the hyperparameters. **A6.1:** As you said, TOP$_{ll}$ is sensitive to the hyperparameters, but **the actual negative effects are tiny**. - On the one hand, the learnable hyperparameters we use can effectively **self-adapt** to the data distribution and **avoid** tedious manual parameter tuning. - On the other hand, the results under **all** hyperparameter configurations in our experiments are always **significantly higher** than the baseline (**16.0~23.9** vs. 10.9 in Q3.1, **15.1~23.9** vs. 10.9 in Table 3 of the paper), which further demonstrates the effectiveness of our method. ---- **Q6.2:** As shown in Figure 3 of the global rebuttal, the introduction of the similarity module removes many predictions of topology. But both results of (b) and (c) are far away from GT.
I think that the existing metrics may not reflect the results well, and the filtering could lead to large variations. **A6.2:** We are very sorry that our **improper presentation** in Figure 3 gave you this misunderstanding. - To avoid the **visual confusion** caused by excessive topological connections, during visualization in Figure 4 of the main paper and its extended version, Figure 3(b,c) of the Rebuttal PDF, only **the correct lanes** (blue lines), **the incorrect lanes** (yellow lines) and **the incorrect topological connections** (red arrows) are drawn (as described in the caption of Figure 4 of the main paper), while **the correct topological connections** are **omitted**, which is why the results in (b,c) look far from the GT. - You can see that **all** the arrows in the figure are red (wrong), but it is **obviously impossible** that all connections are wrong, so please believe that we **deliberately visualized it this way** (although it looks unwise in hindsight), rather than **the result itself being bad**. From this perspective, what the figure shows is that the similarity module can further reduce **false predictions**, rather than **all predictions**. - Thank you very much for pointing out this issue. We will definitely add the correct predictions to the figure in the final version (using another color, e.g., green), but unfortunately, we **cannot upload** any pictures again during the discussion stage due to the limitations of the OpenReview system. If you still have doubts about the visualization results, you can also look at **Figure 5** in the main paper again. We believe both the **Lane Topology** and the **Lane Graph** in it can clearly convey the effect of our method. ---- **Q6.3:** The authors are not able to clearly explain the results of the DET.
**A6.3:** It is indeed hard to provide a clearer explanation in a short time; however, compared with the small drop on DET$\_{t}$ (**-1.4**), the **improvement in lanes & topology** (**+13.0** TOP$_{ll}$, **+4.3** OLS, **+1.3** DET$_l$ [TopoLogic vs. TopoNet]) is **more significant**. Besides, the latter is our **core research motivation and improvement goal**. Objectively speaking, compared with the **reproduced TopoNet**, our method is not inferior in terms of DET$\_t$ (**47.2** vs. 46.9). This means that the issue lies in the reproduction of TopoNet (**46.9** vs. 48.6) rather than in our TopoLogic, which has also puzzled other researchers (e.g., the reproduction of TopoNet has a lower DET$\_{t}$ (**44.5** vs. 48.6) in Table 1 of the SMERF paper). If there is still a chance, we will definitely explore this issue in depth and fix it. ---- **If you have any further questions or suggestions about our article, we'd love to discuss our content with you.**
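The hyperparameter sensitivity discussed in A3.1 and A6.1 can be illustrated with a Gaussian-style mapping from endpoint distance to connection score. The exact form of the paper's Equation (4) is not quoted in this thread, so the expression below, seeded with the learned values $\alpha = 1.3$, $\lambda = 0.23$ from A3.1, is an assumed stand-in rather than the authors' formula:

```python
import math

def gaussian_map(d, alpha=1.3, lam=0.23):
    """Assumed Gaussian-style mapping: a larger endpoint distance d
    yields a lower connection score; alpha scales the peak and lam
    controls how quickly the score decays with distance."""
    return alpha * math.exp(-d * d / lam)

# At a fixed endpoint distance, modest changes in lam move the score
# noticeably, consistent with the TOP_ll sensitivity reported in A3.1.
scores = [gaussian_map(0.5, lam=l) for l in (0.10, 0.23, 0.26)]
```

Making $\alpha$ and $\lambda$ learnable, as the rebuttal argues, lets the mapping self-adapt to the data distribution instead of requiring a manual grid search over such values.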
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their valuable time and comments. In order to better respond to the questions, we have attached a Rebuttal PDF that includes some figures related to Reviewer 8GTK's questions. We hope that the following responses can address the reviewers' concerns. Pdf: /pdf/093acf410b0da71fed82ada481a4ab81d489d201.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
SLTrain: a sparse plus low rank approach for parameter and memory efficient pretraining
Accept (poster)
Summary: In this work, the authors introduce SLTrain, a novel method for pre-training large language models (LLMs) that combines sparse and low-rank matrix structures to enhance parameter and memory efficiency. The low-rank component is learned via matrix factorization, while the sparse component is achieved by uniformly selecting the sparsity support at random and learning only the non-zero entries on the fixed support. This method significantly reduces the memory requirements of LLM pre-training while achieving on-par performance. Strengths: 1. This paper introduces SLTrain, the first method to pre-train LLMs with sparse + low-rank matrices. 2. The authors provide extensive experimental results demonstrating the effectiveness of SLTrain across various model sizes, from 60M to 7B parameters. The comparison with state-of-the-art methods such as ReLoRA and GaLore is thorough and highlights SLTrain's advantages in memory reduction and parameter efficiency. 3. The authors provide a solid theoretical motivation for combining sparse and low-rank components. The empirical analysis of singular values and the distribution of residuals supports the feasibility and effectiveness of this approach. Weaknesses: 1. As the model size scales up, the perplexity score gap between SLTrain and Full-Rank increases, whereas GaLore maintains a more consistent performance at scale. Consequently, the reduction in parameters with SLTrain leads to suboptimal performance and potential scalability challenges. 2. From Table 3, we observe that the memory efficiency of SLTrain does not translate into faster training speeds. This is due to the inclusion of sparse components during training. Additionally, I am curious about its efficiency during inference. For inference, we have two options: using sparse and low-rank matrices, which should reduce memory usage but increase inference time, or using a larger dense weight matrix, similar to full-rank inference.
Therefore, I am interested in the memory and time requirements of both inference modes. 3. I am curious about the potential of using structural sparse patterns, such as butterfly matrices [1], to enhance training performance and efficiency. These structural sparse matrices can be computed in a more efficient way and have better expressivity. Reference: [1] Dao, Tri, et al. "Pixelated butterfly: Simple and efficient sparse training for neural network models." arXiv preprint arXiv:2112.00029 (2021). Technical Quality: 4 Clarity: 3 Questions for Authors: See Weaknesses. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: See Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
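For concreteness, the parameterization described in the summary, a low-rank product plus a sparse matrix with a fixed, uniformly random support, can be sketched as follows. The sizes, the name `delta`, and the initialization scale are illustrative assumptions; in the actual method $B$, $A$, and the non-zero values of $S$ would be trained by autograd:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 64, 64, 8       # toy layer dimensions and low-rank rank
delta = 0.03              # fraction of entries kept in the sparse part

# Low-rank factors, learned via matrix factorization.
B = 0.01 * rng.standard_normal((m, r))
A = 0.01 * rng.standard_normal((r, n))

# Sparse part: support chosen uniformly at random, then FIXED;
# only the values at these positions are learnable.
nnz = int(delta * m * n)
support = rng.choice(m * n, size=nnz, replace=False)
S = np.zeros(m * n)
S[support] = 0.01 * rng.standard_normal(nnz)
S = S.reshape(m, n)

# Forward pass: materialize W = B @ A + S per layer as a temporary
# (freed before the next layer, as the rebuttal describes), then use
# dense matmuls, which GPUs handle efficiently.
W = B @ A + S
n_params = B.size + A.size + nnz   # learnable parameters
```

With these toy sizes, `n_params` is well below the `m * n` of a dense layer, which is the parameter-efficiency argument the review weighs against the perplexity gap.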
Rebuttal 1: Rebuttal: Thank you for acknowledging the strengths of our work and providing much constructive feedback. **1. (W1) As the model size scales up, the perplexity score gap between SLTrain and Full-Rank increases, whereas GaLore maintains a more consistent performance at scale. Consequently, the reduction in parameters with SLTrain leads to suboptimal performance and potential scalability challenges.** In Table 2, the score gaps between SLTrain and Full-Rank are 0.09, 1.68, 0.62, and 0.58 for the 60M, 130M, 350M, and 1B models, respectively. The corresponding differences between the GaLore and Full-Rank scores are 0.82, 1.00, 0.15, and 0.08, respectively. Hence, for both SLTrain and GaLore, the difference from the Full-Rank model increases from 60M to 130M and subsequently decreases. This suggests that the general trend is similar for SLTrain and GaLore, so we respectfully disagree with the reviewer's comment. Notably, the gap between SLTrain and full-rank training can be further reduced by increasing the sparsity level from the outset or by continual pretraining with an additional sparse factor. This approach does not introduce significant memory overhead but enhances performance to levels comparable with full-rank training. Evidence for this is presented in Table 5 of the main paper, where an increased rank/$\delta$ corresponds to improved performance. To validate this claim on larger models, we conducted additional experiments, increasing the sparsity $\delta$ from 0.03 to 0.05 for training LLaMA models with 350M and 1B parameters. The results, detailed in Table 2 of the supplementary one-page PDF, demonstrate that increasing $\delta$ to 0.05 reduces the performance gap while maintaining memory efficiency relative to full-rank training. Furthermore, we highlight that larger models allow for a greater increase in $\delta$ due to the more significant memory gap. **2.
(W2) Comparison of low-rank plus sparse and dense matrix during inference in terms of memory and time.** In Table 1 of the one-page PDF, we have included a comparison of SLTrain and full-rank in terms of inference memory and runtime on LLaMA 130M up to 7B (with the same configuration as in the main paper). We can explicitly observe the trade-off in terms of memory and computation. In particular, as the model size increases, the percentage of memory savings becomes more pronounced, while the increase in computational cost is less noticeable. Lastly, we would like to highlight that the computational efficiency of SLTrain highly depends on the implementation as well as the associated hardware. Traditional GPUs are usually poorly suited for unstructured sparse matrix multiplication. Thus, to properly leverage GPU power, we first compute $W = BA + S$ and then multiply by the input with dense operations. It should, however, be noted that we only declare a temporary variable for storing $BA + S$ for each linear layer; once the layer has executed, this variable is freed and replaced by the $BA + S$ of the subsequent linear layer. This results in only a slight increase in memory compared to the low-rank parameterization. On the other hand, it is still noticeably smaller in memory than the full-rank parameterization (such as GaLore), where the weights of all layers occupy memory simultaneously. This strategy balances memory and computation. However, we believe we can match the computational efficiency of the dense model by exploiting recently introduced sparsity-friendly hardware, such as [1]. Such hardware is known to match dense computational efficiency even for unstructured sparsity. [1] Thangarasa, V., Saxena, S., Gupta, A., and Lie, S. Sparse-IFT: Sparse Iso-FLOP Transformations for Maximizing Training Efficiency. In *ICML 2024*. **3. (W3) Potential of using structural sparse patterns.** Thank you for your suggestion. 
We also believe this is interesting to explore and have already discussed this possibility in the concluding remarks in the main paper. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for your rebuttal. Regarding W1, I still observe a non-negligible performance gap between SLTrain and Full Rank pre-training. In Table 2 of the PDF, the perplexity is slightly higher for SLTrain compared to Full Rank, while the memory usage is slightly lower. Given this, I believe that Full Rank pre-training might still be preferred for its better performance. I would expect SLTrain to achieve a similar performance to be truly impressive. Thank you for your responses to W2 and W3. I will maintain my current score. --- Reply to Comment 1.1.1: Title: Response to further comments Comment: Thank you for your detailed feedback. Based on your comments, it appears that the parameter efficiency of SLTrain, a central contribution of our paper (alongside memory efficiency), may not have been fully recognized. We would like to emphasize that parameter efficiency is a key aspect of our approach, as highlighted in both the abstract and introduction, and substantiated through various experiments presented in the main manuscript and supplementary material. To further substantiate our claims, we have conducted an additional experiment involving the training of LLaMA 350M with an increased $\delta = 0.1$. The results are presented in the table below, where we also include a comparison with GaLore to better illustrate the advantages of SLTrain. 
| | PPL | Mem | Param Size |
|----------|--------|--------|--------|
| Full-Rank | 18.80 | 59.34G | 368M |
| GaLore | 18.95 (+0.8\%) | 58.35G (-1.7\%) | 368M (-0\%) |
| SLTrain ($\delta=0.05$) | 19.24 (+2.3\%) | 58.00G (-2.2\%) | 200M (-45\%) |
| SLTrain ($\delta=0.1$) | 18.72 (-0.4\%) | 58.25G (-1.8\%) | 215M (-42\%) |

As shown in the table, SLTrain with $\delta = 0.1$ not only matches but slightly outperforms the Full-Rank model in terms of perplexity, while requiring only 58\% of its parameter size and maintaining memory efficiency. Across all experiments, SLTrain consistently offers the best trade-off between perplexity, memory usage, and parameter size when compared to other baselines. We believe that parameter efficiency is particularly critical during post-pretraining stages. While many existing works, such as [1], focus on model pruning after pretraining, SLTrain directly trains a smaller model from the pretraining stage, effectively reducing parameter size from the outset. In summary, the results presented in the table not only demonstrate that SLTrain can achieve performance comparable to the Full-Rank model while maintaining memory efficiency, but also highlight the significant parameter savings it offers. **We hope this would lead to a re-evaluation of our work.** [1] Li, Y., Yu, Y., Zhang, Q., Liang, C., He, P., Chen, W., and Zhao, T. LoSparse: Structured compression of large language models based on low-rank and sparse approximation. In *ICML 2023*.
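For concreteness, the $(BA+S)x$ layer evaluation strategy described in the rebuttal above (materialize $W = BA + S$ temporarily, then use a dense matmul) can be sketched as follows. This is a minimal NumPy illustration with assumed shapes and a hypothetical sparse storage format (non-zero values plus flat indices), not the authors' GPU implementation:

```python
import numpy as np

def sltrain_linear(x, B, A, s_vals, s_idx):
    """Evaluate (BA + S)x by materializing W = BA + S temporarily.

    x: (batch, n_in), B: (n_out, r), A: (r, n_in).
    S is stored as non-zero values `s_vals` at flat indices `s_idx`
    into the (n_out, n_in) weight matrix (hypothetical storage format).
    """
    W = B @ A                                 # dense low-rank product
    np.add.at(W.reshape(-1), s_idx, s_vals)   # scatter-add the sparse entries
    return x @ W.T                            # dense matmul; W is freed on return

rng = np.random.default_rng(0)
n_out, n_in, r, batch = 64, 64, 8, 4
B = rng.standard_normal((n_out, r))
A = rng.standard_normal((r, n_in))
nnz = int(0.03 * n_out * n_in)                # delta = 0.03 as in the paper
s_idx = rng.choice(n_out * n_in, size=nnz, replace=False)
s_vals = rng.standard_normal(nnz)
x = rng.standard_normal((batch, n_in))

# Reference computation with S built densely, to check equivalence
S = np.zeros(n_out * n_in)
S[s_idx] = s_vals
y_ref = x @ (B @ A + S.reshape(n_out, n_in)).T
assert np.allclose(sltrain_linear(x, B, A, s_vals, s_idx), y_ref)
```

In the actual implementation the temporary $W$ is freed once the layer executes, which is what keeps the peak memory close to the low-rank parameterization rather than the full-rank one.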
Summary: The submission proposes an approach to reduce memory and computational overhead in training large neural networks. It combines sparse training with low-rank adaptations to achieve efficient training without significant performance degradation. The paper includes an evaluation of SLTrain compared to full-rank, low-rank, ReLoRA, and GaLore models across multiple model sizes and tasks of the LLAMA model family. Strengths: 1. Ablation Study: The paper yields a good overview of the structure of Llama and analyzes the singular spectrum of several layer matrices, to yield a solid motivation for the sparse + low-rank trick 2. Comparative Analysis: The paper provides a detailed comparison with other methods such as full-rank, low-rank, ReLoRA, and GaLore for Llama 3. Implementation Details: The methodology is well-documented, with clear descriptions of how parameters and optimizer states are managed. Weaknesses: 1. Main concern in this work is in the paragraph starting in line 172. The authors aim for low-rank + sparse pretraining of LLMs to save memory. However, their proposed evaluation of a sparse plus low-rank layer is (AB + S)x, where + denotes a sparse matrix add. This implies that the resulting full matrix Y=AB + S is stored in memory (if only temporarily), which defeats the purpose of a low-rank formulation. The authors mention "gpu-friendliness" as the reason for this choice, mistaking GPU throughput for real efficiency gain. - In that regard: How exactly is the "actual memory footprint" in paragraph (line 315) measured? Rigorous details are needed here to make the contribution credible. 2. The paper is very experimental without theoretical justification of the method (which itself is fine), thus an increased focus on implementation details and actual memory consumption of the implementation is expected. 3. Results are specific to Llama. The authors consider specifically the Llama model family in this study. 
Given the computational effort to train LLMs, this is fine, but it should be clearly stated in the scope, which mentions general LLMs. Do the authors have reasonable evidence that their method, and in particular the study of the singular spectrum, extends to other LLMs? Technical Quality: 2 Clarity: 2 Questions for Authors: See above Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your feedback. We would like to take this opportunity to address your concerns and questions individually. We hope our clarifications will lead to your re-evaluation of the contribution of this work. **1. (W1) (1) $BA + S$ requires the full matrix to be stored in memory (if temporarily), which defeats the purpose of the low-rank formulation. (2) How exactly is the "actual memory footprint" in paragraph (line 315) measured?** (1) First, we would like to reiterate that the choice of computing $BA + S$ before multiplying by $x$ is due to sparse multiplication being poorly supported on most GPUs, which would result in higher computational cost if computed separately. However, we only declare a temporary variable for storing $BA + S$ for each linear layer; once the layer has executed, this variable is freed and replaced by the $BA + S$ of the subsequent linear layer. This results in only a slight increase in memory compared to the low-rank parameterization. On the other hand, it is still noticeably smaller in memory than the full-rank parameterization (such as GaLore), where the weights of all layers occupy memory simultaneously. To verify the above claims, we performed an additional experiment comparing the actual maximum memory consumption of the proposed SLTrain linear layer (Algorithm 1, $(BA + S)x$) with a standard full-rank linear layer ($Wx$) and a low-rank linear layer ($BAx$) in a feedforward neural network. We include the results for both the forward and backward passes, varying the number of layers, in Figure 1 of the additional one-page supplementary PDF. Specifically, we set the input, hidden, and output sizes to $2048$ with $r = 128$ and $\delta = 0.03$. From the figure, we observe that as the number of layers increases, the memory reduction of SLTrain relative to the full-rank model becomes more evident, while its memory overhead relative to the low-rank model remains marginal. 
In terms of computational cost, we see that, compared to the full-rank model, SLTrain requires only a slight computational overhead, which is due to the scatter-add operation. Hence, our proposed $BA+S$ modeling and computation is memory efficient and *does not* defeat the purpose of the low-rank formulation. (2) For the second question on the "actual memory footprint", we measure the maximum GPU memory allocated (via the function 'torch.cuda.max\_memory\_allocated') during pretraining. **2. (W2) (1) No theoretical justification. (2) Details on actual memory consumption of the implementation.** (1) We provide a formal justification as follows. *Theorem.* Consider a matrix $S \in \mathbb R^{n \times n}$ with support $\mathcal{S}$ sampled uniformly at random with probability $\delta \in (0,1)$, i.e., $\mathbb P[(i,j) \in \mathcal{S}] = \delta$, for all $i, j \in [n]$. Suppose $\delta = \Omega(\log n/n)$; then with probability at least $1- \mathcal{O}(1/n)$, $BA + S$ is full rank for arbitrary randomly generated $B \in \mathbb R^{n \times r}, A \in \mathbb R^{r \times n}$ and for any $r \leq n$. This claims that while $BA$ itself is low rank and has limited expressivity, augmenting $BA$ with a uniform-support sparse matrix renders it full rank. We will include this theorem in our revised paper. (2) As discussed in the previous point (W1), we measure the maximum GPU memory allocated (via the function 'torch.cuda.max\_memory\_allocated') during pretraining. As part of the supplementary material to the original submission, we also provide the code that we use to perform these computations. We will include these details in the revised manuscript. **3. (W3) Does the study of the singular spectrum extend to other LLMs?** Yes, we believe the observations on the singular spectrum also extend to other LLMs. To validate our conjecture, we repeat the analysis of the singular spectrum for the pretrained Pythia 70M model, downloaded from Hugging Face. 
Specifically, we set the rank $r = 128$ and extract the best rank-$r$ approximation of the learned weight matrices. The results are shown in Figure 2 of the supplementary one-page PDF, where we observe that the residual after removing the best low-rank approximation varies smoothly and has small magnitude. --- Rebuttal Comment 1.1: Comment: Thank you for the answer and clarifying remarks to points 2) and 3). ad 1): The argument for the memory efficiency of the method is that the full weight matrix is only stored temporarily. Can the authors comment on the parallelizability constraints that this temporary full-matrix construction brings? Specifically: The method seemingly generates the full matrix during the evaluation of layer $k$, then discards it again to evaluate layer $k+1$? If so, can the authors elaborate on how the gradient tape is stored? It must not store the temporary (AB + S) matrix, since then one would observe a similar memory footprint as the full model. Second, can the authors elaborate on the practicability of combining their approach with, e.g., pipelining, where multiple layers are evaluated in parallel? Is their method still applicable, since in this scenario the temporarily stored AB+S needs to be constructed in many layers simultaneously, potentially leading to memory spikes?
Summary: The authors propose SLTrain, which performs a low-rank factorization of the weights together with a sparse matrix of factors that represents which parameters to update. The authors show that their method can achieve significant memory savings compared to GaLore while retaining performance. Strengths: - the paper is well written and easy to read - the method is very simple to integrate into the model, as the randomly generated sparse factors are easy to initialize - the savings look significant compared to GaLore Weaknesses: - no strong theoretical justification for why the sparse method works. - it is not clear how much sparsity is needed for a specific task. Is the degree of sparsity impactful on the performance? - what if we only parametrize the weights with BA and not BA+S? Wouldn't that decrease the number of parameters, and how well would it perform? - wouldn't it be more useful if we regenerated the random sparse factors every couple of iterations to have a more effective pretraining? - how much do the results depend on the randomness of the sparse factors? Can we have multiple runs and a standard deviation to see if that randomness plays a big role in the results? - comparing GaLore and SLTrain is not really apples-to-apples, as GaLore is based on the gradients and SLTrain is based on the weights. What if you had GaLore+SLTrain, where gradients and weights are both made more efficient? Technical Quality: 3 Clarity: 3 Questions for Authors: Please address the weaknesses above Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive and constructive comments on our work. **1. (W1) Lack of theoretical justification.** We motivate the low-rank plus sparse modelling from the empirical observations (Figure 2 in the main text and Figure 2 in the one-page supplementary PDF) that the pretrained weights of LLMs can be well-approximated by a low-rank factor plus a (uniformly) sparse factor. To further justify the modelling, we provide a formal justification as follows. *Theorem.* Consider a matrix $S \in \mathbb R^{n \times n}$ with support $\mathcal{S}$ sampled uniformly at random with probability $\delta \in (0,1)$, i.e., $\mathbb P[(i,j) \in \mathcal{S}] = \delta$, for all $i, j \in [n]$. Suppose $\delta = \Omega(\log n/n)$; then with probability at least $1- \mathcal{O}(1/n)$, $BA + S$ is full rank for arbitrary randomly generated $B \in \mathbb R^{n \times r}, A \in \mathbb R^{r \times n}$ and for any $r \leq n$. This suggests that although $BA$ itself is low-rank and has limited expressivity, $BA + S$ is full-rank with high probability as long as the support is selected uniformly at random with sufficiently many non-zero entries. We will include this theorem in our revised draft. **2. (W2) Is the degree of sparsity impactful on the performance?** Yes, the degree of sparsity impacts the performance. We have already shown such results in Table 5 of the main paper, where we vary the degree of sparsity ($\delta$). A trade-off exists between performance and memory consumption, where more parameters (higher memory consumption) generally correspond to better performance. **3. (W3) What if we only parametrize the weights with BA and not BA+S? Wouldn't that decrease the number of parameters, and how well would it perform?** The low-rank-only approach ($BA$) does not work well in pretraining, as already noted, e.g., in [59]. In fact, this is the motivation for our work (we mention this in Lines 64-65): full-rankness is required for effective pretraining. 
Indeed, the $BA+S$ parameterization leads to full-rank weights (as our theorem above suggests) and consistently outperforms $BA$ in scores. See the comparison in Table 2 of the main paper, where the baseline "Low-Rank" refers to the $BA$ parameterization. **4. (W4) Would it be more useful if the random sparse factors were regenerated?** We are not sure that regenerating the random sparse factors would be useful. Our experiments show that learning the sparse component $S$ with a fixed random support is as useful as learning the low-rank part. If we were to regenerate the mask every couple of iterations, we would be discarding the sparse factor $S$ learnt so far and leveraging only the learnt $BA$, which may lead to a decrease in performance. Furthermore, our experiments in Figure 4 suggest that the results remain almost the same with different uniform masks. **5. (W5) How much do the results depend on the randomness of the sparse factors? Can we have multiple runs and a standard deviation to see if that randomness plays a big role in the results?** We have already done this. We validated the influence of randomness for the 60M and 130M models by running each model 5 times; the results in Figure 4 of the main paper show that the randomness does not significantly affect the performance of the two models. In particular, the perplexity for the 60M model at 1.1B tokens is 33.91 with a standard deviation of 0.18, and the perplexity for the 130M model at 2.2B tokens is 26.01 with a standard deviation of 0.10. **6. (W6) Comparing GaLore and SLTrain is not really apples-to-apples, as GaLore is based on the gradients and SLTrain is based on the weights. What if you had GaLore+SLTrain, where gradients and weights are both made more efficient?** First, we highlight that both GaLore and SLTrain aim at memory-efficient pretraining, but using different strategies. Therefore, it is not unfair to compare them. 
Regarding your suggestion about GaLore+SLTrain: indeed, we have already discussed this possibility explicitly in Line 217, given that our proposed strategy is orthogonal to the development of GaLore. However, we believe that this is out of scope for the current submission. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal and for addressing most of my concerns. I have increased my score by 1.
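The full-rank theorem stated in the rebuttals above can also be probed numerically. The following Monte Carlo sketch (illustrative only, with a small $n$ and an assumed $\delta$ above the $\Omega(\log n/n)$ threshold; it is not a substitute for the formal proof) checks how often $BA + S$ is full rank while $BA$ alone has rank at most $r$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, delta, trials = 50, 4, 0.2, 20   # delta = 0.2 > log(n)/n ~ 0.078

full_rank = 0
for _ in range(trials):
    B = rng.standard_normal((n, r))
    A = rng.standard_normal((r, n))
    # uniform-support sparse matrix: each entry is non-zero with probability delta
    S = rng.standard_normal((n, n)) * (rng.random((n, n)) < delta)
    full_rank += int(np.linalg.matrix_rank(B @ A + S) == n)

# BA alone is rank-deficient (rank <= r), yet BA + S is (almost) always full rank
print(f"{full_rank}/{trials} trials gave a full-rank BA + S")
```

With generic Gaussian values on a uniform support, rank deficiency of $BA + S$ would require, e.g., $r+1$ rows of $S$ to be entirely zero, which is exponentially unlikely at this density, matching the theorem's high-probability claim.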
null
null
Rebuttal 1: Rebuttal: Dear Reviewers and ACs, We sincerely appreciate the time and effort you have invested in managing our submitted paper. We are especially grateful for your constructive and thoughtful feedback. In response to your comments, we have provided a formal justification, and we have also included a *one-page supplementary PDF* that contains the following: - Figure 1: Comparison of memory and runtime between the full-rank linear layer and the SLTrain linear layer - Table 1: Comparison of inference memory and runtime between full-rank and SLTrain on LLaMA models - Figure 2: Illustration of the pretrained weight decomposition of the Pythia 70M model. - Table 2: Perplexity and actual max memory allocated for pretraining LLaMA 350M and 1B models with increased $\delta = 0.05$. - Table 3: Perplexity comparisons of pretraining the LLaMA 7B model to 5.2B training tokens. We hope that our responses and additional experimental results adequately address all your questions and concerns. If there are any additional areas that require further clarification or improvement, we would be more than willing to make the necessary adjustments. Regards, Authors Pdf: /pdf/21b8872033545895a0c1a947988c0107e271f253.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
ReNO: Enhancing One-step Text-to-Image Models through Reward-based Noise Optimization
Accept (poster)
Summary: In this work, the authors propose finetuning the noise from which a one-step diffusion model predicts a clean image, with respect to an ensemble of preference constraints. Because only a one-step diffusion model is used, optimizing the noise is fast in terms of number of steps (and therefore wall-clock time). The authors demonstrate that the performance of this optimized one-step text-to-image model is comparable to diffusion models that utilize multiple levels of denoising on preference benchmarks. Strengths: The strength of this paper is that the approach is intuitive, and the success of the approach is reasonable. **Originality:** The paper does not appear to be very novel. Optimizing the initial source noise has been explored before, and the authors are reusing existing preference classifiers/constraints. The only "novel" component seems to be applying it to a one-step diffusion model, which may not actually be a strength (see weaknesses) and seems more like a simplified edge case, as well as utilizing multiple constraints in a weighted fashion, which has limited originality. **Quality:** The paper quality is not particularly high. The organization of the paper is rather messy. The evaluations are also insufficient to demonstrate the benefits of the proposed approach. **Clarity:** The clarity could be improved. For example, Equation 7 is referenced before it is even written. Furthermore, the background sections are structured strangely. The section "Background: One-Step Diffusion Models" spends most of its passage discussing regular diffusion models instead; one-step diffusion models are only mentioned in two sentences (lines 116-117). Furthermore, Section 2, titled "ReNO", reads almost as a background/related-works section (most clearly demonstrated in lines 118-125), but ReNO is only introduced two pages later (approx. line 175). Then, there is a separate Related Works section. 
The paper could benefit from severe reorganization to improve its clarity. **Significance:** In this reviewer's perspective, this paper has limited significance. The results seem expected and do not provide deep insight (they essentially verify that optimization of the initial noise helps performance), and the approach is limited to one-step diffusion models rather than diffusion modeling in general. Furthermore, the reliance on an ensemble means that this approach does not work on a singular specific preference of interest, but only on the average of multiple. Weaknesses: First and foremost, the comparisons showcased within this paper are flawed. In Table 2 and Table 3, the authors compare their approach against the performance of default text-to-image models. It is almost expected that ReNO, by virtue of task-specific optimization, **should** outperform default text-to-image models. This is therefore not an interesting comparison; in fact, it even raises suspicions about the approach in the cases where it does not outperform default models (e.g. Attribute Binding in Dall-E 3). The authors should instead compare their optimized approach against other optimized approaches; for example, DOODL, or DOODL modified to be one-step, the approach from Samuel et al., etc. This would provide a clearer picture of ReNO as an optimization scheme compared to other optimization schemes in tackling preference-respecting generation. The comparison to DPO is a nice small start, but we need more such comparisons - across benchmarks, and across more tasks/metrics. It is really strange that the authors do not compare to the works they state are the most related to their approach. The other weakness of this approach is that it is limited to one-step diffusion models. As the authors state, "backpropagating the gradient through multiple denoising steps can lead to exploding/vanishing gradients, rendering the optimization process unstable." 
They instead limit themselves to using only "a distilled one-step T2I model". The authors have not solved a fundamental problem in a general way - how to optimize the noise for general text-to-image models; instead, they only demonstrate it for a single-step T2I model. The impact of this work is therefore extremely limited; such insights clearly cannot generalize to arbitrary diffusion models, and the authors **have not solved** a key issue of the noise-optimization approach but rather ignored it entirely in favor of a more limited problem setting where the results they achieved are rather expected. The method also seems to rely heavily on an ensemble. However, this makes it rather ungeneralizable; under this approach it is not possible to optimize the text-to-image model with respect to one specific, particular preference. Instead, the reliance on multiple reward functions simultaneously implicitly means that the resulting policy will balance between each of the functions used in the ensemble. If there were a novel preference that it was important for a text-to-image model to respect, this approach could not be applied successfully. This reviewer found that this work lacks severely in insight - indeed, many of the purported results are completely to be expected. The authors repeatedly tout the fast training benefits of ReNO, but really this boils down to it being forced to work with a one-step diffusion model, which makes the fast optimization a rather expected result. The computational cost was not improved in any way by the authors' approach; it comes for free due to them restricting their **choice** of diffusion model to a one-step one. Therefore, it is completely unsurprising that using a one-step diffusion model would result in faster optimization than multi-step ones; there is no new insight to be gained here. 
Furthermore, the user study was conducted for SD-Turbo + ReNO against default models like SD-Turbo without any optimization; of course, there should be an improvement! All these results have verified is that optimizing the noise helps, but this is to be expected - it is completely unsurprising. Furthermore, a severe weakness of ReNO is that once optimized, there is no diversity in the output - particularly because the approach is one-step. For multi-step diffusion, even with the initial noise being optimized, the resulting output has variability because the other denoising steps have stochasticity (from resampling the Gaussian noise). However, one-step diffusion has no diversity for a fixed (optimized) noise. Even though it may be cheaper in wall-clock time to optimize one one-step image, if a batch of $n$ images were to be generated, ReNO would need $n$ times the proposed time cost for one image. On the other hand, Samuel et al. only optimize the initial generation point for multi-step diffusion, enabling a result that still has diversity while respecting preference. The speed is by default 1-4 minutes for Samuel et al., which is comparable or even preferable to ReNO, particularly because it can generate a batch all at once, whereas ReNO needs to be reoptimized for each image, which may take longer. Also, Samuel et al. report optimization improvements that reduce this from minutes to seconds, which makes it strictly better than ReNO, which takes seconds just to generate one image that has no diversity. Samuel et al. state in Section 6.1 that it takes 1-5 minutes to adapt to a new concept and 1-2 seconds to generate new semantically correct images. The authors do not supply new criteria, nor do they innovate the approach of optimizing the noise. Instead, they simply apply multiple existing criteria simultaneously. Furthermore, the approach is limited to one-step diffusion models, which inhibits its generality. 
Ultimately, the scope of this work feels more appropriate for a workshop paper than a conference-level paper. Technical Quality: 2 Clarity: 2 Questions for Authors: Are there ways to optimize multiple noises through ReNO to generate a batch of preference-respecting images without essentially optimizing each noise separately? In what ways can ReNO enable or address backpropagation through multiple denoising levels and provide interesting implications for general multi-step diffusion models? Are there ways this can be demonstrated in the rebuttal? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 1 Limitations: This reviewer foresees no substantial potential negative societal impact from this work. However, in the Limitations section, the authors state that they hypothesize that the reward models may be limiting, and that stronger reward models and preference data may be crucial in enhancing results further. This reviewer feels that this hypothesis could be directly tested within the scope of this work; building off the initial results of Table 1, further study could be performed on utilizing subsets of the complete set of reward functions to evaluate the benefits of each. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to clarify the contributions of ReNO to ensure the problems ReNO is aiming to solve are understood. We tackle the question of whether we can **generally** enhance T2I models *without any fine-tuning* at test time. We propose to tackle this through the noise optimization framework by considering *human-preference reward models* as the optimization criterion. This is not a task-specific framework, nor are we proposing a new general method for noise optimization. Instead, we are specifically aiming to generally enhance a trained model at test time through our approach. This has not yet been considered or discussed in previous work, and how this generally performs was unclear. Additionally, it poses challenges that we propose to tackle through the use of one-step models (for computational efficiency), the use of multiple reward models (to prevent reward hacking), and noise regularization (so the noise stays in distribution). We thoroughly evaluate ReNO across four one-step T2I models (i.e., SD-Turbo, SDXL-Turbo, Pixart-$\alpha$-DMD, and HyperSDXL), across 3 benchmarks (T2I-CompBench, GenEval, Parti-Prompts), human preference evaluations (nearly 10k comparisons), and now also the diversity of the outputs. Our results show generally enhanced performance **without** optimizing for any specific task. For example, SD-Turbo is elevated to prompt-following levels close to DALL-E 3 and is significantly preferred over 50-step SDXL, while HyperSDXL + ReNO is preferred over SD3 (8B). This substantial performance increase is a major contribution of ReNO, and we argue it is a **very significant** finding. It elucidates the importance of the initially sampled noise and demonstrates its effective manipulation within a reasonable timeframe. We believe it is fascinating and unexpected that such a lightweight approach can significantly enhance the performance of image generation. 
This also differentiates ReNO from previous noise optimization work by showcasing the framework's power in a new, more general setting. Lastly, it motivates further research into understanding and controlling initial noise and provides a novel way of benchmarking future T2I reward models. > ***Comparison to SeedSelect (Samuel et al.)*** SeedSelect tackles rare-concept generation, a fundamentally different problem from the one ReNO is solving. I.e. the goal of SeedSelect is, given 3-5 reference images, to generate images of rare concepts represented in these images. They propose to tackle this based on the noise optimization framework and reduce computational time, as mentioned in our related work section: "To mitigate this, Samuel et al. [64] propose a bootstrap-based method to increase the efficiency of generating a batch of images. However, this method is limited to settings where the goal is to generate samples including a concept jointly represented by a set of input images". On the other hand, we consider a more general setting, where each image is generated separately, given noise and text as input. Therefore, adapting SeedSelect to the general setting considered in our paper is not straightforward. > ***The authors demonstrate that the performance of this optimized one-step text-to-image model is comparable to diffusion models that utilize multiple levels of denoising on preference benchmarks.*** We disagree with this statement. We clearly show across multiple benchmarks and user studies that ReNO-optimized one-step models significantly outperform their corresponding multi-step models in all benchmarks and even their next-generation multi-step ones (See Tables 2, 3 & 5 and Figures 4 & 5). 
> ***Optimizing the initial source noise has been explored before, and the authors are reusing existing preference classifiers/constraint.*** As mentioned above, while we are repurposing existing reward models, as far as we are aware, the optimization criterion of human-preference reward models for noise optimization has not been considered or discussed before. > ***The results seem expected, and do not provide deep insight (they essentially verify that optimization of the the initial noise helps performance)*** We would argue that the magnitude of the performance increase makes it a **very significant** insight that is not expected at all. One-step models are known to have significantly worse visual quality and prompt following than the base multi-step models, therefore it is not expected that purely optimizing the initial noise would make them outperform even next generation multi-step models. Further, increasing the diversity of the generated images compared to one-step models is also unexpected and a valuable finding as less diversity is also a common weakness of one-step models. > ***First and foremost, the comparisons showcased within this paper are flawed. In Table 2 and Table 3, the authors compare their approach against the performance of default text-to-image models. It should be almost expected that the performance of ReNO, by virtue of task-specific optimization, **should** have improved performance over default text-to-image models.*** As mentioned, we are not doing any task-specific optimization. We are proposing one optimization that **generally** improves a T2I model, and we thoroughly evaluate each ReNO-enhanced model over different benchmarks as well as human evaluation. > ***This is therefore not an interesting comparison; in fact, it even raises suspicions on the approach in the cases that it does not outperform default models (e.g. Attribute Binding in Dall-E 3).*** We respectfully disagree. 
Pushing SD-Turbo to a performance close to DALL-E 3 just by changing the initial noise is not expected and is actually a very interesting comparison as also acknowledged by Reviewer UuFQ. --- Rebuttal 2: Title: Rebuttal by Authors [2/2] Comment: > ***the authors **have not solved** a key issue of the approach of noise optimization but just ignored it entirely in favor of a more limited problem setting where the results they achieved is rather expected.*** While we agree that solving the issue of computational efficiency for multi-step generation with noise optimization is a very interesting research question, we never claim to solve this problem generally with ReNO. We sidestep this challenge by considering one-step diffusion models. Note that this is still a major insight because applying noise optimization in the form of DOODL/D-Flow is not practical for general T2I generation, as mentioned in the global rebuttal. > ***Furthermore, the reliance on an ensemble [...] ungeneralizable [...]*** We report the performance of all the combinations of reward models in Table 1 & 7. As can be seen, also without an ensemble with only HPSv2 (better image quality) or ImageReward (better prompt following), ReNO achieves significant performance improvements. We are unsure if we understand why this would make ReNO not generalizable. In the general setting we are considering, the goal is no one specific preference but a general improvement of the used model. > ***If there were a novel preference that it were important to optimize a text-to-image model to respect, this approach would not be able to be applied successfully.*** On the contrary, our approach is designed to be flexible and adaptable to various objectives. There's no inherent limitation preventing the application of ReNO to novel preferences or optimization goals. In fact, we demonstrate in Figure 3 how "personalized" objectives can be effectively incorporated. 
With just 10 optimization iterations, we show significant increases in specific color attributes (redness/blueness) of generated images. > ***Furthermore, the user study was conducted for SD-Turbo + ReNO against default models like SD-Turbo without any optimization; of course, there should be an improvement!*** In the user study, we specifically compare SD-Turbo + ReNO against SDXL-Turbo, SD2.1 (50-step), and SDXL (50-step). All of these models are usually preferred over SD-Turbo. SD-Turbo + ReNO is significantly preferred over all of them, even the next-generation SDXL with 50 steps! Additionally, HyperSDXL + ReNO is preferred over SD3 (8B). > ***Are there ways to optimize multiple noises through ReNO to generate a batch of preference-respecting images without essentially optimizing each noise separately?*** Theoretically, this could also be incorporated into ReNO as a further criterion function. However, the problem we are tackling is general T2I generation, which by default operates on a single prompt and noise. Thus, this is out of the scope of our work but an interesting future direction. > ***What are the ways ReNO can enable or address the backpropagation through multiple denoising levels, and provide interesting implications for general multi-step diffusion models? Are there ways this can be demonstrated in the rebuttal?*** While we agree that solving the challenges of noise optimization for multi-step generation is a very interesting research question, we never claim to solve this problem generally with ReNO, and it is also out of the scope of this work. We show how the noise optimization framework can be leveraged to effectively arrive at a **very significantly** better model. We leave how to best adapt our findings to multi-step diffusion models to future work. > ***Equation 7 was referenced before it was even written*** Thanks for this pointer; it was supposed to refer to Equation 4, and we will update the paper accordingly. 
> ***one-step diffusion models were only mentioned in two sentences (lines 116-117).*** The paragraph [lines 112-125] completely serves to introduce the different one-step diffusion models we employ in this work and how they work in general. > ***reads almost as a background/related works section (most clearly demonstrated in lines 118-125)*** In this part, we introduce the four different one-step models we use to benchmark ReNO. We consider this as background for ReNO, as the section is also titled. > ***Then, there is a separate Related Works section. The paper could benefit from severe reorganization to improve its clarity.*** Thank you for this comment; we will consider how to improve the clarity. With the separate related work section after the introduction of ReNO we aim to contextualize ReNO within the scope of all related work. Which part of the way we introduce ReNO exactly was unclear? We would be happy to incorporate any specific suggestions to enhance the clarity. --- Rebuttal Comment 2.1: Title: Reviewer Response [1] Comment: This reviewer appreciates the detailed rebuttal, and provides thoughts in response: This reviewer understands that exploring the optimization of the noise vector is the main problem ReNO is aiming to solve. A limitation is that, the analysis seems only to hold for one-step models. It is not obvious that optimizing the noise is generally useful for diffusion modeling; especially since over multiple timesteps, the effect of each particular noise sample is minimized - potentially including the initial one. The impact of this work is therefore severely limited. To that note, it is not particularly impressive if by default, one-step T2I models are capped in terms of modeling capability. If even with the best optimization, T2I models cannot outperform multistep diffusion models without optimization (e.g. DALL-E 3), then one-step T2I models are essentially a dead-end, barring amazing new developments in the space. 
This reviewer therefore disagrees that this is a particularly significant finding. The purported importance of initially sampled noise appears only to hold for this limited, toy example and is not supported for diffusion models in general. The authors do not provide real insights into how to perform noise optimization over multiple timesteps, nor can they provide real insights into noise optimization for the general case of (potentially multi-step) diffusion models as the usefulness of the source noise is then in question. In Table 2 it is pretty apparent that DALL-E 3 outperforms ReNO, while being a multi-step denoising diffusion model. There is some confusion as to why the authors disagree with the initial statement and suggest that “ReNO-optimized one-step models significantly outperform their corresponding multi-step models in all benchmarks”. Furthermore, Table 5 doesn’t seem relevant to the discussion. This reviewer would like to clarify that when the authors state that they propose an optimization that “generally improves a T2I model”, they are really referring to an optimization that improves a one-step T2I model only. It is not obvious that such benefits extend to the multi-step case. --- Rebuttal 3: Comment: We thank the reviewer for engaging with our rebuttal. We would like to first clarify our motivation for approaching this work: Our goal was to obtain the best possible/extremely high-quality text-to-image generation using open-source models and tools. To this end, models such as the 50-step SD2.1/SDXL models would be the first choice. However, the shortcomings of these models are well-documented and are further corroborated in our paper. Alternatively, one could train bigger models on larger datasets with higher-quality data (as is the trend with SD2.1 -> SDXL -> SD3). Unfortunately, these require large-scale GPU resources unavailable to most research groups (e.g. 
DALL-E 2 itself uses 40000+ A100 GPU days) and additionally, current SOTA models are paid services and not open source (DALLE-3, SD3 (8B)). Therefore, test-time optimization methods are a compelling alternative to enhance the generation quality of existing multi-step T2I models. These methods (e.g. DOODL) work with multi-step models and enhance the quality of the generated images. However, not only are they computationally expensive, they also provide limited improvements in the metrics that they optimize for (on average, CLIPScore increases by only 0.03 with DOODL on 50-step SD2.1). Our hypothesis was that the lack of effective optimization was due to exploding gradients and other challenges of optimizing multi-step diffusion models. Therefore, we made the unconventional decision of optimizing the initial noise of a one-step model. A priori, it was unclear if these models could even match the corresponding multi-step model even after noise optimization, let alone surpass them. To our surprise, not only was it 60x faster to optimize, but the gains in CLIPScore (0.12) were 4x those obtained from optimizing multi-step models (e.g., DOODL). Enhancing this further, we incorporated other models that could provide complementary signals to improve both visual quality (e.g., HPSv2) and prompt following (e.g., ImageReward). As a result, we obtained image generation results that were the highest reported results among any open-source method on T2I-Compbench and GenEval. Further, even for the same time that a 50-step SDXL takes for generation, we show better prompt following by performing noise optimization with SD-Turbo (Fig 5). We believe that providing a recipe to enhance the quality of text-to-image generation (noise optimization of the best distilled models) in a cost-effective manner is our key contribution. > ***In Table 2 it is pretty apparent that DALL-E 3 outperforms ReNO, while being a multi-step denoising diffusion model. 
There is some confusion as to why the authors disagree with the initial statement and suggest that “ReNO-optimized one-step models significantly outperform their corresponding multi-step models in all benchmarks”. [...] In fact, it is troubling that even with optimization it does not outperform DALL-E 3;*** Our finding is for one-step models and their **corresponding** multi-step models. SD-Turbo is based on SD2.1 with an 800M-parameter U-Net and a 336M-parameter CLIP ViT-H text encoder, which makes SD2.1 the multi-step model to compare to. SD-Turbo and DALLE-3 are models trained with very different resources and architectures. While specific details for DALLE-3 are not available, SD3, for example, leverages an 8B-parameter DiT with a T5-XXL (4B-parameter) text encoder. We show that in all experiments conducted, SD-Turbo + ReNO outperforms 50-step SD2.1 and even 50-step SDXL, which is the next-generation multi-step model. Based on our findings, a one-step model based on SD3/DALLE-3, like SD3-Turbo, enhanced with ReNO should outperform multi-step SD3/DALLE-3. Unfortunately, these models are proprietary, and thus, we could not benchmark with SD3-Turbo. To summarize, DALLE-3 and SD3 are "two generations" after SD-Turbo and thus do not constitute a fair comparison between one-step and multi-step models. Similarly, a method improving LLaMA2-7B need not outperform GPT-4 to be a meaningful research contribution. > ***Regarding multiple reward models: it still appears that ReNO depends on an ensemble of reward models. And that performance does not increase significantly without the ensemble. Relying on an ensemble suggests that this method may not perform well for a specific, novel criteria (in situations where other existing criteria in the ensemble are not as important)*** As mentioned in our previous answer, ReNO can be flexibly used based on given preferences. 
While we benchmark all current T2I reward models we are aware of, adapting this to new models is straightforward as long as the novel criteria are expressed as a differentiable function. See, for example, the color objective in Figure 3; alternatively, given a new, more robust, and stronger T2I reward model, ReNO can be employed with just that reward. Table 1 shows that even a single reward model already achieves significant improvements. --- Rebuttal Comment 3.1: Comment: > ***The comment remains the same, that something that has explicit optimization with respect to preference intuitively should outperform something that has not been optimized whatsoever. This reviewer still believes the user study is unfair, and that it is completely intuitive and expected that an SD-Turbo + ReNO should outperform non-optimized SD-Turbo with respect to preference, because SD-Turbo+ReNO was explicitly optimized to respect it! In fact, it is troubling that even with optimization it does not outperform DALL-E 3; intuitively these optimized techniques should always outperform non-optimized ones.*** We agree that SD-Turbo + ReNO should outperform SD-Turbo as long as the optimization was done correctly given robust T2I reward models. However, the margin of increase is not clear a priori; see, e.g., the performance increase of DOODL in the author rebuttal and Table 3 in the additional PDF. Additionally, outperforming 50-step models that are 2-5x bigger with 10x the compute used for training is not at all to be expected purely from noise optimization. > ***It is strange that the authors are adamant on staying with a one-step diffusion model, refusing to even try some distilled technique that can generate samples in 2 steps or 6 steps (e.g. some LCM models).*** Even with a 2-step model, the VRAM requirement will significantly increase. Thus, e.g., 2-step HyperSDXL will not fit into 40GB anymore, making it impractical as a general image generation model. 
We thank the reviewer for the suggestion and agree that this is an interesting future research direction. --- Rebuttal Comment 3.2: Title: Reviewer Response [3] Comment: Surprisingly, the storytelling and motivation outlined in the general comment is more appealing and clear than what was ultimately presented in the paper. This reviewer is actually on board with the ultimate motivation of "obtain[ing] the best possible/extremely high-quality text-to-image generation using open-source models and tools". Leading it towards cheaper models and alternatives while considering memory constraints and navigating closed-source models to motivate "test-time optimization methods [as] a compelling alternative to enhance the generation quality of existing multi-step T2I models". And then going from test-time optimization of large multi-step models to a single one. Focusing on numerical insights about expensiveness of existing models (e.g. their RAM) would be compelling - a numbers-focused argument would be welcomed. This story reads much better than the one that was initially provided; where the central focus seems to be about T2I noise optimization broadly - but the only thing demonstrated was for one-step T2I models. And that mismatch is a big source of motivating confusion because the findings for one-step T2I models do not translate necessarily to general source-noise optimization insights for T2I models. Currently there also seems to be a big focus on preference optimization (through an ensemble), which distracts from the overall goal of simply "improving generation quality of existing T2I models" which everyone can appreciate. If the storytelling were structured like the roadmap provided above from the onset, this reviewer would appreciate the work and its scope much better. This reviewer actually strongly encourages the authors to write their paper (esp their Introduction, Abstract, etc.) 
with this motivating story in mind - to avoid any potential confusion and disappointment in what the authors really seek to tackle. The focus is *not* on source noise optimization because its implications do not extend (currently) to general diffusion models - the source noise optimization is simply a mechanism for improving test-time generation quality. The one-step diffusion model is *not* used to sidestep multi-step diffusion models (which raises suspicions of the authors avoiding the worthwhile interesting questions, and raises concerns that their insights are not generally useful or applicable across all T2I models), but a choice made for tractability. This presentation would allow the reviewer to actually appreciate the results rather than be focused on the one-step T2I choice from the get-go and be disappointed at the lack of useful general insights. This reviewer appreciates the proposed story here and can see the pieces fit in - but as the main paper is written currently, the reviewer still feels uncomfortable directly recommending acceptance. A slight increase of the score will be made out of a common understanding - but still, the reviewer would highly recommend rewriting the motivating portions of the paper (results can obviously remain the same) to fit this proposed storyline. --- Reply to Comment 3.2.1: Comment: Thank you for your insightful feedback. Your comments have helped us recognize areas where we can better articulate our existing work. We plan to refine the presentation in the Abstract, Introduction, and Section 2 to more clearly communicate our motivation and contributions. Specifically, we will emphasize: 1. Our primary goal is to maximize the performance of T2I models within significant resource constraints. 2. Test-time optimization emerges as a promising direction for this goal. However, our experiments with DOODL reveal limitations in achieving the desired balance of quality and efficiency. 3. 
Consequently, we focus on one-step models as a tractable starting point, introducing a novel human preference reward model based approach that leverages complementary strengths to boost overall image generation performance. For example, we plan to update the following sentence in the Abstract to better reflect this narrative: "In this work, we propose Reward-based Noise Optimization (ReNO), a novel approach that enhances T2I models at inference by optimizing the initial noise based on the signal from one or multiple human preference reward models." -> "In this work, we provide a new perspective on generally improving T2I generation through Reward-based Noise Optimization (ReNO), a novel approach that enhances one-step T2I models at inference. ReNO optimizes the initial noise based on signals from multiple human preference reward models, offering a unique solution to improve generation quality within strict computational constraints." > ***Currently there also seems to be a big focus on preference optimization (through an ensemble), which distracts from the overall goal of simply "improving generation quality of existing T2I models" which everyone can appreciate.*** Additionally, we plan to lower the emphasis on the ensemble of reward models and discuss it as a tool to enhance image generation quality in our updated manuscript. > ***The focus is not on source noise optimization because its implications do not extend (currently) to general diffusion models - the source noise optimization is simply a mechanism for improving test-time generation quality. 
The one-step diffusion model is not used to sidestep multi-step diffusion models (which raises suspicions of the authors avoiding the worthwhile interesting questions, and raises concerns that their insights are not generally useful or applicable across all T2I models), but a choice made for tractability.*** We agree with your assessment and will more clearly clarify in our revised manuscript that our focus is on improving test-time generation quality, with source noise optimization as a mechanism and one-step models chosen for tractability, rather than to sidestep multi-step models or avoid broader questions in noise optimization for diffusion models. We believe this revised framing will more effectively communicate the significance and broader impact of our work in the context of practical T2I model deployment and optimization.
Summary: The paper introduces Reward-based Noise Optimization (ReNO), a novel approach to enhance Text-to-Image (T2I) models at inference by optimizing the initial noise based on human preference reward models. ReNO significantly improves model performance within a computational budget of 20-50 seconds, outperforming all current open-source T2I models and being preferred almost twice as often as the popular SDXL model in user studies. Additionally, ReNO-optimized models demonstrate superior efficiency, surpassing widely-used models like SDXL and PixArt-alpha with the same computational resources. Strengths: 1. A new approach that optimizes the initial noise in T2I models at inference time using gradient ascent, which enhances model performance significantly. 2. Outperforms all current open-source T2I models and is preferred almost twice as often as the popular SDXL model in user studies. Additionally, ReNO-optimized models demonstrate superior efficiency, surpassing widely-used models like SDXL and PixArt-alpha with the same computational resources. Weaknesses: Noise optimization is a bit like an "adversarial attack" that achieves its goal by adding noise to the original input, but the only drawback of the method is the time cost, as using gradient descent to obtain noise requires a lot of iterations. Therefore, optimizing time is a crucial factor that requires attention. The paper points out that the optimization time takes 20-50 seconds, and it says that one-step t2i is used. What if multi-step reasoning is directly used? Is the effect better than optimizing noise? Technical Quality: 3 Clarity: 3 Questions for Authors: If the authors can significantly reduce the number of optimization iterations, it would be a good method; they could refer to some adversarial attack methods. 
I hope to see the authors improve the optimization time. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The paper points out that the optimization takes 20-50 seconds; this is too long and not easy to use. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed comments and are especially glad they emphasize the significant enhancement achieved by ReNO. Below, we address the concerns raised in the review. > ***Therefore, optimizing time is a crucial factor that requires attention. The paper points out that the optimization time takes 20-50 seconds, and it says that one-step t2i is used. What if multi-step reasoning is directly used? Is the effect better than optimizing noise?*** We would like to point out that ReNO actually provides an efficient formulation to enhance T2I models even compared to existing multi-step T2I models. For instance, in Figure 5, we show that ReNO outperforms existing open-source models such as multi-step SDXL and PixArt-$\alpha$ with the same compute budget. Even with 10-15 iterations (4-10 seconds depending on the one-step model), we see significant improvements in prompt following and visual quality, as shown in Figure 5. Additionally, ReNO is much faster than other noise optimization methods, as mentioned in the global rebuttal. > ***Noise optimization is a bit like an "adversarial attack" that achieves its goal by adding noise to the original input*** While we agree that there are connections to the literature on adversarial attacks, we would like to emphasize that the changes in the generated images are specifically not adversarial. We agree that leveraging different optimization techniques, e.g. from the adversarial attack literature, to reduce the number of iterations/time required for convergence could be an interesting future direction. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for the responses. Referring to the opinions of other reviewers, I think the proposed method is not universal and time-consuming, so I keep my score. --- Reply to Comment 1.1.1: Comment: The reviewer mentions that our method is not universal and time-consuming, and points to the other reviews. 
It would be much appreciated if the reviewer could provide more details, since in our rebuttal to each of the other reviews, we have thoroughly clarified these points. > ***Universality of ReNO:*** In this work, we are tackling the most general form of Text-to-Image generation. ReNO-enhanced one-step models consistently surpass the performance of all current open-source Text-to-Image models across a variety of **general** T2I benchmarks and a comprehensive user study. > ***Time-consuming:*** We address this point in the rebuttal above. ReNO-enhanced SD-Turbo outperforms existing open-source models such as multi-step SDXL and PixArt-$\alpha$ **with the same compute budget**. This shows that ReNO is not time-consuming but actually time-efficient.
Summary: The paper presents a novel approach called Reward-based Noise Optimization (ReNO) to enhance the performance of one-step Text-to-Image (T2I) models. ReNO optimizes the initial noise of T2I models using a human preference reward model, addressing the limitations of current T2I models in capturing complex details in compositional prompts. The method shows promising results across four different one-step models on T2I-CompBench and GenEval benchmarks, outperforming open-source T2I models and achieving comparable performance to a proprietary model. ReNO is computationally efficient and improves the quality of generated images, as demonstrated through user studies. Strengths: 1. It is reasonable to introduce a distilled one-step T2I model to address the notorious issue of exploding/vanishing gradients that exist in T2I diffusion models. 2. ReNO improves the accuracy of T2I models in capturing intricate details in complex prompts. It consistently surpasses the performance of popular open-source T2I models. 3. ReNO is applicable to existing models, avoiding the need for retraining from scratch. The approach is based on human preference, enhancing the alignment with desired outputs. Weaknesses: 1. ReNO is essentially a runtime optimization approach that leverages advanced reward models to achieve state-of-the-art text-to-image generation capabilities. Indeed, the authors opt to optimize initial noise as a learnable parameter. Could I choose to make the parameters of a UNet learnable to achieve a similar objective? 2. Have the authors attempted to apply ReNO to text-to-image diffusion models, such as SD, using techniques like gradient checkpoint and LoRA? 3. Has the proposed ReNO been tested on video generation diffusion models for text-to-video tasks? 4. 
The authors are encouraged to include the following references: - Guided image synthesis via initial image editing in diffusion model, ACM MM 2023 - InitNO: Boosting Text-to-Image Diffusion Models via Initial Noise Optimization, CVPR 2024 Technical Quality: 3 Clarity: 4 Questions for Authors: Overall, ReNO presents a simple yet effective solution to enhance text-to-image diffusion models without the need for additional training. Generally, I have a positive outlook. Please refer to the Weaknesses section for a detailed list of questions and suggestions. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: The authors provide Limitations and Broader Impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed comments. We are especially glad that they appreciated the simplicity and effectiveness of ReNO. Below, we address the concerns raised in the review. > ***ReNO is essentially a runtime optimization approach that leverages advanced reward models to achieve state-of-the-art text-to-image generation capabilities. Indeed, the authors opt to optimize initial noise as a learnable parameter. Could I choose to make the parameters of a UNet learnable to achieve a similar objective? Have the authors attempted to apply ReNO to text-to-image diffusion models, such as SD, using techniques like gradient checkpoint and LoRA?*** Yes, we also briefly tried optimizing the parameters of the U-Net with LoRA. This leads to the model generating images with visual artifacts, caused by "reward hacking". In contrast, only optimizing the noise keeps the model untouched and, thus, does not lead to it generating adversarial images as long as the noise stays in distribution. Moreover, fine-tuning with reward models poses a number of other challenges [lines 40-51]. For example, the SDXL U-Net has 2.6B parameters, which makes it computationally infeasible to fine-tune at inference for every prompt. Even with LoRA and gradient checkpointing (as done by AlignProp[57]), this would still have 10M+ parameters to train (vs ~16k parameters for the noise optimization), which would take several minutes, if not hours (AlignProp fine-tunes SD1.5 on 4 GPUs for 24 hours). > ***Has the proposed ReNO been tested on video generation diffusion models for text-to-video tasks?*** Our noise optimization framework is directly applicable to video generation. However, at the moment, while there are human preference reward models for video generation, there are no one-step video generation models publicly available, which are otherwise prohibitively expensive to train in a resource-constrained setting. 
Once open-source one-step video models are available, ReNO would be ideally suited to further enhance the performance of video generation. > ***The authors are encouraged to include the following references*** We thank the reviewer for pointing out these related works, especially the related concurrent InitNO, and we will add these to the paper. While InitNO also looks to optimize the initial noise, this is done by computing a loss function using attention maps as opposed to optimization with human preference reward objectives.
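For readers following the thread, the noise-optimization loop that these rebuttals describe (gradient ascent on a reward of a one-step generator's output, with a regularizer keeping the noise in distribution) can be sketched in a few lines. This is an illustrative sketch only, not the paper's implementation: `toy_generator` and `toy_reward` are hypothetical stand-ins for a one-step T2I model and a human-preference reward model, and finite-difference gradients stand in for backpropagation through the model.

```python
import math
import random

def toy_generator(noise):
    # Stand-in for a one-step T2I model: maps noise -> "image" features.
    return [math.tanh(x) for x in noise]

def toy_reward(image):
    # Stand-in for a human-preference reward model (higher is better).
    return -sum((x - 0.5) ** 2 for x in image)

def noise_regularizer(noise):
    # Penalize the noise drifting away from the Gaussian norm it was drawn from.
    norm_sq = sum(x * x for x in noise)
    return (norm_sq - len(noise)) ** 2

def reno_style_step(noise, lr=0.05, reg_weight=1e-3, eps=1e-4):
    # One gradient-ascent step on reward minus regularization, using central
    # finite differences (a real implementation would backpropagate instead).
    def objective(z):
        return toy_reward(toy_generator(z)) - reg_weight * noise_regularizer(z)
    grad = []
    for i in range(len(noise)):
        zp, zm = list(noise), list(noise)
        zp[i] += eps
        zm[i] -= eps
        grad.append((objective(zp) - objective(zm)) / (2 * eps))
    return [x + lr * g for x, g in zip(noise, grad)]

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(8)]
before = toy_reward(toy_generator(noise))
for _ in range(50):  # the paper reports ~10-50 iterations in 20-50 seconds
    noise = reno_style_step(noise)
after = toy_reward(toy_generator(noise))
print(after > before)  # reward of the generated output should improve
```

In the actual setting the noise is a latent of roughly 16k parameters (as noted in the rebuttal above), the objective is a weighted sum of several reward models, and gradients flow through the one-step generator by automatic differentiation.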
Summary: The paper introduces Reward-based Noise Optimization (ReNO), a novel method to improve Text-to-Image (T2I) models by optimizing the initial noise during inference using human preference signals. This approach addresses the limitations of current fine-tuning methods, which often lead to "reward hacking" and poor generalization. By utilizing one-step diffusion models, ReNO enhances image quality and adherence to complex prompts without retraining, achieving significant performance improvements on benchmarks like T2I-CompBench and GenEval. Extensive user studies demonstrate that ReNO models are preferred nearly twice as often as popular models like SDXL, showcasing their efficiency and effectiveness in enhancing T2I model performance and user satisfaction. Strengths: 1. Optimizing the initial noise input during inference to improve image quality and prompt fidelity, which is an innovative angle compared to typical model fine-tuning approaches. 2. Conducts extensive experiments on multiple challenging benchmarks (T2I-CompBench, GenEval, Parti-Prompts) to evaluate the method. 3. Compares against a wide range of baselines and state-of-the-art models, including proprietary ones like DALL-E 3 and Stable Diffusion. 4. Analyzes the impact of different reward models and optimization iterations. 5. Clearly explains the motivation and approach of ReNO. 6. Demonstrates competitive performance with proprietary models like SD3, despite using smaller open-source models as a base. 7. Provides a practical method to enhance text-to-image models at inference time with reasonable computational cost (20-50 seconds per image). Weaknesses: 1. Limited analysis of potential negative impacts or failure modes: The paper does not thoroughly discuss potential downsides or risks of their approach. For example: - Could optimizing for reward models lead to unexpected or undesirable outputs in some cases? - Are there risks of amplifying biases present in the reward models? 
- Could this approach be misused to generate more convincing deepfakes or misleading images? 2. Limited comparison to related optimization approaches: The paper compares to some baseline models, but doesn't thoroughly compare to other test-time optimization methods for text-to-image models. Comparisons to approaches like: - DOODL (Kerras et al., 2022) - D-Flow (Ben-Hamu et al., 2023) would help contextualize the novelty and advantages of ReNO. 3. Insufficient analysis of impact on image diversity: - The paper doesn't thoroughly examine whether optimizing for rewards reduces the diversity of generated images. Some analysis of how ReNO affects the distribution of outputs would be valuable. (especially theoretical) Aside from the mentioned points, everything else was satisfactory, and I enjoyed the paper! Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Reward model robustness: - How sensitive is ReNO to the choice of reward models? Have you observed any cases where optimizing for certain reward models leads to unexpected or undesirable results? This could help understand the robustness and potential limitations of the approach. 2. Computational efficiency: - Could you provide more details on how ReNO's performance scales with the number of optimization steps and computational budget? Is there a clear point of diminishing returns? 3. Diversity of outputs: - Does optimizing for reward models potentially reduce the diversity of generated images? Have you conducted any analysis on how ReNO affects the distribution of outputs compared to the base models? 4. Integration with other techniques: - How might ReNO complement or interact with other techniques for improving text-to-image models, such as fine-tuning or prompt engineering? (Out of curiosity) Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors acknowledge some limitations of their approach, particularly in the "Limitations" section. 
They mention: - Convergence of different models to similar performance levels, potentially due to limitations in reward models. - The increased VRAM requirements of their method. - Persistent challenges in generating humans, rendering text, and modeling complex compositional relations. - They briefly mention the possibility of hallucination in their method, which is a relevant concern for AI-generated content. I think they addressed the limitations adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed comments. We are especially glad that they enjoyed the paper. Below, we address the concerns raised in the review. > ***Limited comparison to related optimization approaches*** We address the comparison to DOODL in the global response. For D-Flow, the proposed method takes even longer (30-40 minutes for a 350M parameter model). > ***Insufficient analysis of impact on image diversity / Diversity of outputs*** We address this in the global response. Would you like us to analyze any other metrics to measure the distribution of generated images with ReNO compared to without ReNO? > ***Limited analysis of potential negative impacts or failure modes*** ReNO can successfully enhance the general capabilities of existing T2I models, including better prompt following and visual quality. As with all T2I models, this can be used for positive and negative applications. Additionally, since the objective is to optimize for the human preference reward models, the model is biased towards aspects that these models especially focus on. Reward models can also carry biases from, e.g., their training data, posing a risk of amplifying biases in existing models and producing undesirable outputs. However, ReNO also opens up the possibility of including a bias-mitigating reward model. > ***Reward model robustness*** We would like to highlight Tables 1 & 7, where we benchmark the choice of different reward models. We did observe undesirable outputs **without** the noise regularization, as the noise can drift out of distribution, sometimes producing images with severe artifacts that still achieve a high reward score. Additionally, the reward models focus on some aspects more (e.g., colors, counting) than others (e.g., spatial understanding in the prompt). 
Specifically, we found that for a prompt including "under/above" or "left/right", the reward model scores can be very similar regardless of which one is chosen for the same image. > ***Computational Efficiency*** We would like to highlight Figure 5, where we show the improvement in attribute binding on T2I-Compbench with increasing iterations. We see rapid improvements for the first 10-15 iterations (4-6 seconds for SD-Turbo on one A100), where it already surpasses popular multi-step open-source models like SDXL and PixArt-$\alpha$. We support this with some qualitative examples in Figure 6. After 50 iterations (20 seconds), we see diminishing returns and no major gains beyond 75 iterations (30 seconds). > ***Integration with other Techniques:*** Our experiments (Tables 2 & 3) indicate that we get better results with stronger base models. Therefore, fine-tuning base models to improve their performance or orthogonal techniques like prompt engineering should enhance the results further and be straightforward to incorporate together with ReNO. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concern. I think the paper is now in good shape (especially after the new experiments). I'd like to maintain my score.
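To make the procedure discussed in this rebuttal concrete — gradient-based optimization of the initial noise against a reward, with a regularizer keeping the noise in distribution — here is a minimal toy sketch. The function names, the quadratic stand-in reward, and the shell-norm regularizer are all illustrative assumptions, not the authors' implementation:

```python
import math
import random

def noise_opt_step(noise, reward_grad, lr=0.05, reg=0.1):
    # One hypothetical ReNO-style update: ascend the reward gradient while
    # pulling ||noise|| back toward sqrt(n), the typical norm of an
    # n-dimensional standard Gaussian (a stand-in for the paper's noise
    # regularization that keeps the noise in distribution).
    n = len(noise)
    norm = math.sqrt(sum(x * x for x in noise))
    return [x + lr * g - reg * (norm - math.sqrt(n)) * (x / norm)
            for x, g in zip(noise, reward_grad)]

# Toy stand-in for "reward of a one-step generation from this noise":
# reward(z) = -sum((z - goal)^2), whose gradient is 2 * (goal - z).
random.seed(0)
goal = [0.5] * 16
noise = [random.gauss(0.0, 1.0) for _ in range(16)]
reward = lambda z: -sum((a - b) ** 2 for a, b in zip(z, goal))

before = reward(noise)
for _ in range(50):
    grad = [2.0 * (b - a) for a, b in zip(noise, goal)]
    noise = noise_opt_step(noise, grad)
after = reward(noise)  # higher than `before`: the optimized noise scores better
```

In the real method the reward gradient comes from backpropagating human preference reward models through a one-step generator, which is what makes the per-image runtimes of tens of seconds feasible.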
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their time and their detailed and insightful comments. We appreciate their recognition of ReNO's significant enhancement in general performance (*UuFQ*,*8y7L*,*mwA2*), and its novelty (*UuFQ*, *mwA2*), clarity (*UuFQ*, *8y7L*), and practicality (*UuFQ*, *8y7L*). We want to highlight the new experiments we conducted and some of the joint questions here, while additionally addressing each raised issue in individual replies to each reviewer. We repeat the major contributions of ReNO here: - Practical Noise Optimization for T2I Models: ReNO demonstrates that one-step models can make noise optimization a practical tool for generally enhancing Text-to-Image generation. It significantly improves efficiency in both time (20-50 seconds vs 20 minutes per image for previous noise optimization methods) and memory usage. This approach makes optimizing for human preference reward models during inference practically possible, substantially enhancing both prompt adherence and visual quality. - Significantly Enhanced Performance: ReNO-enhanced one-step models achieve results that are on par with much larger, better-trained closed-source models while outperforming popular open-source models by large margins on standard benchmarks. Notably, given the same computational budget, one-step models with ReNO outperform widely used multi-step models, offering superior results with enhanced efficiency. This demonstrates the effectiveness of aligning T2I outputs with human preferences during inference, even with the constraints of a one-step model, and elucidates the power of noise optimization in a novel, more general setting. --- **Diversity evaluation.** We've conducted a diversity analysis in response to the reviewers' questions about ReNO's impact on diversity. We generated images with 50 different seeds for 10 prompts from each of the 11 challenges of PartiPrompts, totaling 110 prompts. 
Then, for each prompt, we evaluate the diversity over 50 seeds by computing the mean pairwise LPIPS and DINO score. The higher these two scores are, the less diverse the generated images across seeds. We report the mean and standard deviation across all prompts in the following table, as well as in the additional pdf. | | LPIPS | DINO | | ----------------- | -------------------- | -------------------- | | SD-Turbo | 0.382 ± 0.043 | 0.770 ± 0.101 | | SD-Turbo + ReNO | *0.246* ± 0.046 | *0.712* ± 0.132 | | SD2.1 (50-step) | **0.243** ± 0.049 | **0.623** ± 0.150 | | SDXL-Turbo | 0.391 ± 0.044 | 0.835 ± 0.073 | | SDXL-Turbo + ReNO | **0.291** ± 0.041 | *0.763* ± 0.116 | | SDXL (50-step) | *0.351* ± 0.042 | **0.700** ± 0.128 | Remarkably, ReNO actually significantly increases the diversity of one-step models. As we believe this is a significant finding, we plan to include this in the main text of the paper. We also provide some non-cherry-picked results for the first 5 seeds in the pdf. We hypothesize that the reason for this increased diversity is that ReNO optimizes the noise away from the zero mean of the noise distribution, thus creating more diverse noises compared to sampling from the standard Gaussian. Even though we regularize the noise to stay in distribution, we do not enforce this. To validate this hypothesis, we compute the standard deviation across all noises before and after ReNO-optimization. As expected, the standard deviation for the initial noise across this sample size (110 prompts * 50 seeds) is *1.0000*. In contrast, the standard deviation of ReNO-optimized noise is *1.0039*, which confirms this hypothesis. --- **Comparison to DOODL.** Due to the fact that DOODL takes more than 80 days on one A100 to evaluate on T2I-Compbench, we were unable to make comparisons to it on the standard benchmarks. 
The following table illustrates this point: | | sec/iter (total) | T2I-CompBench duration | VRAM | \#params | | --------------- | ---------------- | ---------------------- | ---- | -------- | | SD2.1 + DOODL (CLIP) | 24s (20min) | 83.33 A100 days | 40GB | 860M | | SD-Turbo + ReNO (only CLIP) | 0.2s (10s) | 0.63 A100 days | 10GB | 860M | | SD-Turbo + ReNO | 0.4s (20s) | 1.25 A100 days | 15GB | 860M | Furthermore, when employing a larger model such as 50-step SDXL or multiple reward models, DOODL's VRAM requirement exceeds 40GB, making it infeasible even on A100 GPUs. Additionally, since DOODL optimizes multi-step models, it encounters problems with exploding gradients, which may compromise its noise optimization efficiency compared to ReNO, despite its extended runtime. To substantiate this, we evaluated DOODL on the first 50 prompts from T2I-CompBench's three attribute binding tasks. Our analysis includes both VQA evaluation results and changes in the optimized CLIPScore, effectively measuring the efficacy of the 50-step optimization process. We compare SD2.1 + DOODL (using CLIPScore) against SD-Turbo + ReNO (using CLIPScore) and ReNO with all considered reward models. |Model|Color ↑|Shape ↑|Texture ↑|CLIPScore ↑| |---|---|---|---|---| |SD2.1|33.4|52.4|63.4|0.261| |SD2.1 + DOODL (CLIP)|38.5 (+5.1)|51.6 (-0.8)|64.6 (+1.2)|0.289 (+0.03)| |SD-Turbo|60.4|48.5|61.8|0.362| |SD-Turbo + ReNO (only CLIP)|70.1 (*+9.7*)|66.9 (*+18.4*)|79.6 (*+18.2*)|0.483 (**+0.12**)| |SD-Turbo + ReNO (all)|82.1 (**+21.7**)|77.4 (**+28.9**)|82.8 (**+21.0**)|0.437 (*+0.08*)| We observe that ReNO achieves substantially higher gains compared to DOODL, both w.r.t. the CLIP loss that we optimize for and also the independent VQA evaluation. Note that the results from DOODL are in line with those reported in their paper, where they report increases in CLIPScore by 0.026 and 0.031. 
To provide a more comprehensive comparison between DOODL and ReNO, we plan to include this analysis and its results in the paper's Appendix. Pdf: /pdf/7d1c9d454faa0f78fb9daeac81cfd6e3ff75e456.pdf
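The seed-diversity protocol from the global rebuttal (many seeds per prompt, then the mean score over all image pairs) can be sketched generically. The `l2` helper is a toy stand-in for the perceptual LPIPS/DINO scorers, which require pretrained networks; whether a higher mean indicates more or less diversity follows the convention of whichever metric is plugged in:

```python
import itertools

def mean_pairwise(images, score):
    # Average a pairwise score over all unordered pairs of generations
    # for one prompt, mirroring the protocol described in the rebuttal.
    pairs = list(itertools.combinations(images, 2))
    return sum(score(a, b) for a, b in pairs) / len(pairs)

def l2(a, b):
    # Toy distance over flattened "images" (higher = more different).
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

identical = [[0.0, 0.0]] * 3                   # zero pairwise distance
spread = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]  # distinct generations
```

With 50 seeds per prompt this loops over 1225 pairs per prompt, which is why the rebuttal reports means and standard deviations aggregated across prompts.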
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
SAND: Smooth imputation of sparse and noisy functional data with Transformer networks
Accept (poster)
Summary: The paper addresses the limitations of ordinary transformers for imputation of functional data over irregularly sampled longitudinal data. The authors propose a novel variant of the transformer that takes derivatives into account, called SAND. Theoretically, it's shown that SAND with a certain number of hidden neurons can do functional imputation well. Then empirically, extensive experiments show that SAND performs much better than other methods. Strengths: 1. Clear writing. Everything is explained quite well, and the flow of logic is smooth. It's immediately clear to me that the authors study an important problem and do well. 2. A complete story with theories and empirical results. The paper uses a lot of math, but it's used quite properly, because the mathematics comes naturally out of the structure of the problem itself, rather than from artificial assumptions. The theoretical analysis properly justifies the design of the new transformer architecture and is very neat. Then there are extensive empirical studies that prove the usefulness of the new architecture. 3. The authors study a specific class of problems with great clarity. I'm so tired of papers claiming to have one method that improves things in great generality, which usually are not evaluated completely. It's good to see that we have steady progress in developing solid tools for specific problems of certain structures. Weaknesses: I'm not seeing any substantive weaknesses. Technical Quality: 4 Clarity: 4 Questions for Authors: No. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Yes, limitations are addressed adequately and no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable and positive feedback. --- Rebuttal Comment 1.1: Title: Thank you too Comment: We thank the authors for their thanks.
Summary: This paper studies the problem of how to perform imputation of the underlying function from noisy or sparse observations with functional data. In particular, the authors present "SAND" (Self-Attention on Derivatives), a variant of the transformer architecture, by introducing $\mathrm{diff}(\cdot)$ (derivative) and $\mathrm{Intg}(\cdot)$ (integral) operators to the standard transformer, to address the imputation of sparse and noisy functional data. The authors provide theoretical guarantees for the proposed SAND method, as well as empirical evaluations across various datasets demonstrating that SAND outperforms existing methods. Strengths: - The proposed method with the two ingredients, $\mathrm{diff}(\cdot)$ and $\mathrm{Intg}(\cdot)$, in the modeling part is well-motivated for tackling the smoothness issue. - Empirically, the paper demonstrates that the proposed SAND transformer can achieve strong performance on both synthetic and real-world datasets. Weaknesses: - The theoretical results seem to only justify that the proposed architecture can approximate FPCA, and do not provide insight into why this new transformer variant is better than the standard transformer for solving the imputation problem. - The design of the $\mathrm{diff}(\cdot)$ operator is not very intuitive, since the operator itself does not align well with the derivative operator. - I would suggest adding a simple baseline (standard transformer with some smoothing post-processing) for comparison. For example, applying a certain smoothing technique on top of the output of standard transformers, e.g., locally adaptive regression splines (LARS, Mammen & van de Geer, 1997), trend filtering (Tibshirani, 2014). [Mammen & van de Geer, 1997] Enno Mammen and Sara van de Geer. Locally adaptive regression splines. The Annals of Statistics, 25(1):387–413, 1997. [Tibshirani, 2014] Ryan J Tibshirani. Adaptive piecewise polynomial estimation via trend filtering. The Annals of Statistics, 42(1):285–323, 2014. 
I am happy to raise my score if some of the questions/weaknesses are addressed during rebuttal and discussion. Technical Quality: 3 Clarity: 3 Questions for Authors: - The notation in Section 3.1 and Section 3.2 is a bit unclear to me. For example, the bolded tilde $T$ and $T_c$ are $(1+p)\times M$ matrices, and I guess there are positional embeddings inside bolded tilde $T$ and $T_c$. However, these are not formally defined. I would suggest adding a notation paragraph to clarify the notation somewhere (could be in the appendix if the main body has limited space). - In Line 184, why subtract the first element (or why construct the tilde $T_c$)? I guess it is because of the $\mathrm{diff}(\cdot)$ and $\mathrm{Intg}(\cdot)$ operators. Would removing this shift term affect the performance of SAND? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable and positive feedback, which we have used to strengthen our paper. Please see our responses below. ### [Weakness 1] **Answer**: Thank you for the comment. We acknowledge that most of our theorems focus on SAND. However, our theorem does reveal one reason that SAND is better than the standard Transformer: See Corollary 1 and lines 223 to 226 of the manuscript, which explain how SAND effectively separates the smooth underlying process from random noise by limiting hidden nodes in the network. The standard Transformer is incapable of doing so due to its universal approximation property and the existence of noise in the training data (lines 149 to 154 in the manuscript). When the standard Transformer encounters noisy testing data, it often continues producing similar zigzag patterns, as it has learned to replicate the noise from the training phase. ### [Weakness 2] **Answer**: The traditional numerical derivative operator takes the form $f^\prime(a) \approx \frac{1}{b - a}[f(b) - f(a)]$, a linear combination of the inputs $f(a)$ and $f(b)$ with fixed weights (line 195). Our $\rm{diff}(\cdot)$ operator similarly calculates a weighted sum of these inputs, with weights learned by the network. Our decision to design the $\rm{diff}(\cdot)$ operator to resemble the attention module was driven by two main considerations: - **Cohesion with the Transformer Architecture**: Our SAND module is a simple augmentation of the standard Transformer. SAND modifies the conventional attention module by omitting the softmax operation, thereby aligning its computational complexity with that of the original attention mechanism, as detailed in Table 1 of Vaswani et al. (2017). Compared to recurrent and convolutional networks (other architectural techniques for achieving smoothing), SAND enjoys major computational advantages (Vaswani et al., 2017). 
- **Leveraging Proven Mechanisms:** The attention mechanism has demonstrated effectiveness across numerous tasks. By minimally adapting this component, we aim to preserve this versatility in our operator. The $\rm{diff}(\cdot)$ operator retains the core benefits of attention while extending its application to approximate derivatives and enforce smoothness. ### [Weakness 3] **Answer**: We have indeed considered the approach of post-processing imputations from a standard Transformer. As discussed in lines 164 to 168 and in Table 1 under the rows labeled “GT1P”, “GT1S”, “GT2P”, and “GT2S”, we applied both PACE and kernel smoothing as post-processing methods to imputations from a standard Transformer model. The effectiveness of these methods is discussed in lines 302 and 307, where we note that while the total variation of the estimated function generally decreases, the improvement in MSE is marginal, and in some cases, the MSE actually increases after post-processing. These findings underscore the limitations of simply adding some smoothing post-processing to standard Transformers, regardless of which smoothing technique is used. We chose PACE as a popular imputation method for noisy functional data, and we chose kernel smoothing due to its simplicity and historical prominence as the canonical smoothing method. Following your suggestion, we have also applied trend filtering to the output of the standard Transformer, using `trendfilter` from the package `genlasso` in `R`. The results are in the PDF file attached to our global rebuttal. Although the improvement is greater than using PACE or kernel smoothing as post-processing, SAND remains the best overall method. Our revised paper incorporates and discusses these results, and accounts for the other valuable feedback. ### [Question 1] **Answer**: Yes, there are positional embeddings inside bolded $\tilde{T}$ and $\tilde{T}_c$. 
The notation is defined in lines 185 and 186: bolded $\tilde{T}$ (or $\tilde{T}_c$) are matrices with the first row being $T$ (or $T_c$), and the remaining $p$ rows are the positional encoding of the output grid. Thank you for the suggestion. Due to the limited space in the main text, we will add a notation paragraph in the appendix. ### [Question 2] **Answer**: The subtraction of the first element serves multiple purposes: - **Geometrical Justification**: This adjustment ensures that two curves, which may have identical shapes but different intercepts (the first element in $T$), are treated equivalently by the $\rm{diff}(\cdot)$ operator. By subtracting the first element, we effectively remove the absolute positioning, focusing solely on the shape of the curve. - **Mathematical Rationale**: Our approach is inspired by the first fundamental theorem of calculus, which expresses a function as $f(b) = f(a) + \int_a^b f'(x)\,dx$ (refer to line 201 in the main text). In the context of SAND, $f(a)$ represents the first element of $T$, serving as the initial value. The $\rm{Intg}$ function acts as "$\int$", and $\rm{diff}$ approximates the derivative. Subtracting the first element aligns with this conceptual framework, where $f(a)$ is reintegrated post-differentiation to reconstruct the curve accurately (see Equation 5 between lines 189 and 190). - **Algorithmic Efficiency**: Subtraction normalizes the input data, enhancing the uniformity of the inputs. This normalization can lead to faster convergence of the $\rm{diff}(\cdot)$ operator’s parameters during training. Omitting the shift could potentially require more training epochs for SAND to achieve similar convergence, due to the variability in initial values across data samples. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, Thank you again for your valuable feedback on our paper. 
We have addressed your questions to the best of our ability and have revised our paper based on your suggestions (including additional results and more clearly written motivations). The revised paper is much stronger thanks to your feedback. Please let us know if you have any follow-up questions/concerns, and we will address them before the discussion period ends on Aug 13. We value your insights and are eager to further improve our paper based on your follow-up thoughts. --- Rebuttal Comment 1.2: Title: Response Comment: I would like to thank the authors for their response. The newly added experimental results further demonstrate the benefits of the new architecture. I have increased my score.
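The discrete analogue of the first-fundamental-theorem argument in the response to Question 2 — differences computed relative to an initial value, then reintegrated by a cumulative sum — can be verified in a few lines. The plain finite differences below are a fixed-weight stand-in for SAND's learned diff(·) and Intg(·) operators, with hypothetical helper names:

```python
def diff(values):
    # Successive finite differences: a fixed-weight stand-in for the
    # learned diff(.) operator, which the rebuttal describes as a weighted
    # sum resembling f'(a) ~ (f(b) - f(a)) / (b - a).
    return [b - a for a, b in zip(values, values[1:])]

def intg(first, derivs):
    # Discrete f(b) = f(a) + integral of f': a cumulative sum of the
    # differences, re-anchored at the initial value f(a).
    out = [first]
    for d in derivs:
        out.append(out[-1] + d)
    return out

samples = [0.0, 0.5, 1.2, 1.1, 0.7]
reconstructed = intg(samples[0], diff(samples))  # recovers `samples`
```

This also illustrates why the first element is subtracted and then reintegrated: the differences alone lose the intercept, and re-adding $f(a)$ restores it exactly.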
Summary: This paper proposes a new class of transformers for sparse and noisy functional data. In particular, a new module, namely self-attention on derivatives (SAND), is incorporated into vanilla transformers to model the sub-derivative of the imputed curve, thereby promoting smoothness. The authors also theoretically prove the number of hidden nodes needed by the SAND transformer to achieve a certain prediction error bound for functional imputation. Empirical results are provided to justify the advantages of the proposed model. Strengths: 1. The proposed method, i.e., SAND, is interesting. 2. Theoretical properties of SAND are well-studied. 3. The paper is well-written with illustrative figures. Weaknesses: 1. Experiments on larger-scale benchmarks are needed to justify the advantages of SAND. 2. Efficiency analysis of SAND is missing. 3. The advantages of using attention in SAND are not clearly discussed in the paper. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Can you please elaborate more on the use of attention in SAND? Why do we need attention there? After Rebuttal I will increase my score from 4 to 5. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable and positive feedback, which we have used to strengthen our paper. Please see our responses to the weaknesses below. Our responses to the questions are listed in the global author response. ### [Weakness 1] **Answer**: Thank you for highlighting this aspect. We have not scaled up the simulation for three main reasons: - **Simulation Constraints**: As detailed in line 267, each simulated case requires one day for processing on an Nvidia GeForce GTX 1080 Ti. With a total of 88 cases outlined across Tables S2-S10 in the supplementary material, completing all simulations under these conditions requires nearly three months. Even with early-stopping techniques, it still takes up to two months to complete the simulation. These time constraints make larger simulations impractical under our compute budget. - **Relevance to Real-World Data**: The sample size of 10,000 in our simulations is substantial, particularly when compared to real datasets we have used, such as the UK electricity dataset with 5,600 samples and the Framingham Heart Study with 870 samples. Simulating data at a scale comparable to real-world datasets ensures that our findings are both realistic and applicable. - **Effectiveness of SAND at current scale**: Although 10,000 samples may not seem extensive, our simulations demonstrate a significant advantage of SAND over nine competitors. This indicates robust performance even at the current scale, supporting the effectiveness of SAND on datasets of similar size to popular real-world functional imputation tasks. ### [Weakness 2] **Answer**: Thank you for highlighting the importance of providing a detailed efficiency analysis. SAND modifies the conventional attention module by omitting the softmax operation, thereby aligning its computational complexity with that of the original attention mechanism, as detailed in Table 1 of Vaswani et al. (2017). 
Let $m$ denote the number of observations in a subject, $h_d$ be the representation dimension, and $k$ be the kernel size of convolutions. We summarize the computational efficiency of SAND as follows: - **Compared to Recurrent Modules**: SAND requires $O(1)$ sequential operations, regardless of sequence length, facilitating faster computation and better suitability for parallel processing. In contrast, recurrent layers inherently require $O(m)$ sequential operations due to their dependency on previous outputs for current computations, which significantly slows down processing and limits scalability. - **Compared to Convolutional Modules**: The computational complexity of SAND is $O(m^2\cdot h_d)$, whereas for convolutional modules, it’s $O(k\cdot m\cdot h_d^2)$. In our simulation and data applications, the number of observations per subject $m$ is at most 30 while the representation dimension is 128. Given these parameters, convolutional operations become computationally intensive, leading to slower performance compared to SAND. SAND, analogous to a single attention module, introduces minimal additional computational overhead compared to standard transformers, which typically comprise hundreds of alternating layers of attention and feed-forward modules. We acknowledge the significance of this aspect and our revised paper clearly articulates the computational complexity of SAND as well as its computational advantages. Regarding SAND’s prediction efficiency (in the sense of statistical estimation): Our evaluation of SAND’s prediction efficiency, measured through relative MSE, is detailed in Table 1. The results indicate that SAND is approximately 9% more efficient than a standard transformer and 11% more efficient than the best non-transformer benchmark, which is PACE. This demonstrates not only SAND’s computational advantages but also its superior accuracy in predictive tasks. 
This multi-dimensional approach to assessing efficiency—encompassing computational, estimation, and prediction aspects—provides a robust evaluation of SAND’s performance across various metrics. ### [Weakness 3] Thank you for pointing out the need for a clearer discussion on the benefits of incorporating the attention mechanism in SAND. Here are the key advantages: - Performance Enhancement: In section 5.1, our extensive experiments demonstrate that SAND significantly improves imputation accuracy, as evidenced by reductions in mean square error and total variation compared to standard methods. This performance boost underscores the effectiveness of the attention-based approach in handling functional data. - Addressing Non-linearity: Traditional post-processing techniques (see Table 1 in the manuscript) in functional data analysis, such as PACE, are typically linear and may fail to capture complex, non-linear features in the data (lines 101 to 102 in the manuscript). SAND effectively bridges this gap by utilizing the non-linear modeling capabilities of the attention mechanism, allowing for a more nuanced and powerful analysis of functional data. - Computational Efficiency: Leveraging the attention module within SAND capitalizes on the computational efficiencies discussed in response to [Weakness 2] and as detailed by Vaswani et al. (2017). Our revised paper more clearly lists these advantages of utilizing the attention module in SAND. ### [Question 1] **Answer**: Please see our response to the integrated question in the global author rebuttal for this question. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, Thank you again for your valuable feedback on our paper. We have addressed your questions to the best of our ability and have revised our paper based on your suggestions (including additional results and more clearly written motivations). The revised paper is much stronger thanks to your feedback. 
Please let us know if you have any follow-up questions/concerns, and we will address them before the discussion period ends on Aug 13. We value your insights and are eager to further improve our paper based on your follow-up thoughts. In particular, our revision now more clearly motivates the use of attention. Any feedback on this updated explanation would be appreciated! --- Rebuttal Comment 1.2: Comment: I thank the authors for the detailed response. Can you please provide SAND's runtime and memory usage in comparison with the baseline? I know asking for additional experimental results less than 1 day before the deadline is inappropriate, but those results should be easy and quick to acquire. I need to know SAND's runtime and memory usage compared with the baseline to decide whether to raise my score. --- Reply to Comment 1.2.1: Comment: Thank you for your request for additional details concerning the runtime and memory usage of SAND compared to baseline methods. Below, you’ll find a table comparing SAND with eight other baseline models. The table includes each model’s number of parameters, memory usage, runtime, and the number of epochs required for convergence. It highlights that SAND, while enhancing the capabilities of a standard transformer, requires only minimally more memory and runtime. Notably, while models such as CNP and GAIN have smaller model sizes, their requirement for a significantly higher number of epochs (as per the epoch suggestions on their respective GitHub repositories) to achieve convergence means that their total runtime may not be shorter than that of transformer-based models. This aspect is crucial in understanding the efficiency and practical applicability of SAND in real-world scenarios. | | Use GPU? | num of params | memory usage* | Runtime | num of epochs | |:--------------------:|:--------:|:-------------:|:------------:|:----------------:|:-------------:| | PACE | X | NA | 2.84GB | 41 secs | NA | | FACE | X | NA | 11.20GB | 1218 secs | NA | | mFPCA | X | NA | 4.37GB | 635 secs | NA | | MICE | X | NA | 434MB | 1800 secs | NA | | 1DS | X | NA | <10MB | 5 secs | NA | | CNP | V | 83K | 310MB | 1.8 secs/epoch | 50,000 | | GAIN | V | 164K | 460MB | 1.2 secs/epoch | 150,000 | | Standard Transformer | V | 930K | 3.61GB | 15.6 secs/epoch | 5,000 | | SAND | V | 996K | 3.97GB | 16.4 secs/epoch | 5,000 | * Memory usage values represent peak usage during the training phase.
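As a quick sanity check on the complexity comparison in the response to Weakness 2, one can plug the quoted sizes (at most m = 30 observations per subject, representation dimension h_d = 128, and an assumed convolution kernel size k = 3) into the leading-order operation counts:

```python
# Leading-order per-layer operation counts from the response to Weakness 2,
# evaluated at the quoted sizes. Order-of-magnitude estimates only, not
# measured FLOPs; k = 3 is an assumed kernel size for the comparison.
m, h_d, k = 30, 128, 3
attention_ops = m ** 2 * h_d      # SAND / attention: O(m^2 * h_d)
conv_ops = k * m * h_d ** 2       # convolutional module: O(k * m * h_d^2)
ratio = conv_ops / attention_ops  # how much more work the conv layer does
```

At these sizes the convolutional count exceeds the attention count by roughly an order of magnitude, consistent with the rebuttal's claim that convolution is the slower option when m is small and h_d is large.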
null
null
Rebuttal 1: Rebuttal: We are grateful for the detailed feedback from all reviewers, which has significantly contributed to refining our manuscript. We would like to address a specific concern raised by two reviewers (**92dE** and **pcLe**) regarding the use of attention in SAND's $\rm{diff}(\cdot)$ operator and its design. These concerns relate to fundamental aspects of our methodology and its innovative application in functional data imputation. ### [Integrated Question] Can you elaborate on the conceptual foundations and decision-making process behind the integration of the attention mechanism and the design of the $\rm{diff}(\cdot)$ operator in SAND? **Answer**: We arrived at our decision to use the attention module in SAND as follows: We initially started with a coarse imputation from a standard Transformer and aimed to improve on its poor performance. One intuitive method could involve adding penalties to the standard transformer to constrain its outputs or applying posterior smoothing techniques to enhance the coarse imputation. We discuss the limitations of these approaches in lines 156 to 169 and benchmark their empirical performance in section 5.1. An alternative approach is to apply a “patch” to the standard transformer such that the resulting architecture outputs a smooth imputation, a process depicted in Figure 2. Conceptually, we could utilize any machine learning model that handles vector inputs and outputs to achieve this, as there are numerous viable options. We adopt the attention module as our patch in SAND for several reasons: - Computational Efficiency: SAND is computationally efficient compared to other well-established vector-to-vector models like recurrent networks or convolutional networks. This efficiency is crucial in our choice, as highlighted by Vaswani et al. (2017). The attention module itself has demonstrated effectiveness across diverse applications, thanks in great part to its scalability.
- Achieving Smooth Outputs: Our objective is to produce a smooth curve, which requires that its first derivative be continuous. Since most neural networks inherently model continuous functions, they are suited to this task. The first derivative $f^\prime(a) \approx \frac{1}{b - a}[f(b) - f(a)]$ involves a linear combination of function values. In constructing SAND, we chose the attention module because it inherently performs a weighted summation of its inputs, aligning well with our needs for modeling derivatives (as detailed in Equation 1 and lines 130 to 134 in the manuscript). These considerations guided our decision to employ the attention mechanism in SAND, which enjoys favorable theoretical properties and empirical performance. We provide individual responses below to address the remaining concerns from each reviewer, to clarify missing details, and to provide additional discussion that strengthens our paper. We thank all reviewers for their time and efforts! We hope our responses have persuasively addressed all remaining concerns. Please don’t hesitate to let us know of any additional comments or feedback on improvement. Note that we include all additional experimental results in the one-page pdf submitted along with this global rebuttal response. Pdf: /pdf/c06e1243c4755ab808f96ecd11ce5762a9763609.pdf
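Editorial note: the rebuttal's analogy above — a derivative estimate is a linear combination of function values, and an attention output is a weighted sum of values — can be made concrete with a minimal numpy sketch. This is purely illustrative (not SAND's implementation; the scores below are hypothetical):

```python
import numpy as np

# Forward difference: f'(a) ~ [f(b) - f(a)] / (b - a), i.e. a linear
# combination of function values with fixed weights w = [-1, 1] / (b - a).
a, b = 1.0, 1.01
w = np.array([-1.0, 1.0]) / (b - a)
vals = np.array([np.sin(a), np.sin(b)])
deriv_est = w @ vals  # close to cos(1.0), the true derivative of sin at 1.0

# An attention output has the same algebraic form -- a weighted sum of
# values -- except the weights come from a softmax over (learned) scores.
scores = np.array([0.3, 1.2])  # hypothetical attention scores
attn_weights = np.exp(scores) / np.exp(scores).sum()
attn_out = attn_weights @ vals
```

The caveat, which the rebuttal glosses over, is that softmax weights are nonnegative while finite-difference stencils need negative coefficients; in practice the surrounding linear layers can supply the sign.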
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Structured Unrestricted-Rank Matrices for Parameter Efficient Finetuning
Accept (poster)
Summary: The paper explores the use of structured matrices instead of low-rank matrices for approximating the finetuning updates for Transformer models. The structure imposed on the approximation matrices determines the number of trainable parameters. The paper explores the use of structured matrices: circulant and Toeplitz matrices, and Kronecker products of matrices, for finetuning language models and vision models. The paper shows that structured matrices are better at approximating various classes of matrices, such as random (full rank), near-low-rank, and near-low-intrinsic-rank matrices, than low-rank matrices, and that this translates to better performance when using them for finetuning at a lower parameter cost. Strengths: - Use of structured matrices for finetuning achieves on-par/better performance than prior works at a lower parameter cost. - The toy experiments show that structured matrices can approximate matrices better than low-rank matrices (controlling for the number of parameters). The authors verify this through two experiments: - Approximating Symmetric Positive Definite Matrices - Fitting a toy dataset (pinwheel with Gaussian noise) using a neural net with layers composed of structured and low-rank matrices. Weaknesses: 1. **No studies on more difficult tasks**: - While I understand that the authors have shown the effectiveness of the proposed method on smaller models, it would be interesting to see how the proposed method performs on larger models for more difficult tasks like language generation (e.g., instruction tuning) or math/commonsense reasoning, which are commonly used to study PEFT methods on large language models. 2. **Lack of details on choice of hyperparameters**: - The paper does not provide sufficient details about how the data is split, or whether all the methods are compared by training and evaluating on the same data splits. - Moreover, the paper does not provide details on how the hyperparameters are chosen, but only the final values are mentioned.
(Section E in the appendix.) 3. **Minor presentation details**: - Figure 1: the caption says the presented methods are in the top left (in green), but they appear to be in the top right in the plots, and also do not appear to be in green. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. **Section 4.1**: - What is the significance of Section 4.1? What does the comparison between Circulant and Toeplitz matrices with W(G,H) convey? Could the authors provide more explanation about why this comparison is done, especially since the final PEFT algorithm uses only Circulant and Toeplitz matrices? - Do all the methods in this study approximate the exact same matrix? Why do the initial errors vary by such a large margin across W(G,H), Circulant, and Toeplitz matrices? - Why do Circulant and Toeplitz matrices use more iterations than W(G,H) (200 vs. 2000)? If the number of iterations is the same (i.e., W(G,H) is optimized for 2000 iterations), do the results change? - Why is there no study on the approximation of a random matrix using Toeplitz and Circulant matrices? - Why is low-rank(I=20) + epsilon not considered for Circulant and Toeplitz matrices? 2. **Experiments on image classification using ViTs**: - What is the pretraining data and task for the base ViT? - How is the data split into train, validation and test splits, and on which split are the results reported? - How and on which split are the hyperparameters tuned? Section E, lines 699-700 mention that the experiments use a learning rate of 5e-5, but SVHN uses 5e-4. How is this chosen? 3. **Experiments on GLUE**: - What is the difference between SURM (Circular) and SURM (Circular-LoRA/Adapter) (similarly Toeplitz and Kronecker)? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to sincerely thank the Reviewer for their very valuable feedback and comments. > Studies on larger tasks: One of the key things that we showed in our experiments was the improved performance in the low data regime. Specifically, across a plethora of low-resource image experiments (VTAB-1K) we have shown the improved performance of SURMs (circulant). The NLP experiment on Glue was provided to showcase the ability of SURMs to extend to other modalities (text) and another class of small parameter models (adapters). We will make this clearer in the paper. > Choice of hyperparameters: Data split: For vit-image experiments (table 1) we used the standard dataset split in tensorflow tfds datasets (https://www.tensorflow.org/datasets). VTAB-1K provides its own split and code, which we used (github). For image segmentation we used the same split as the Segment-Anything-Model (SAM) [33]. For Glue we used the standard train/dev split (https://gluebenchmark.com/). > How the hyperparameters are chosen All hyperparameters are tuned on the training/val set and numbers are reported on a separate test set. We will update the manuscript with this information. Thanks for pointing out the typo in the caption of Figure 1. We will fix it. > Significance of Section 4.1: motivation, comparisons and conclusion We apologize for the confusion. Our motivations for the two separate parts (left and right column of figure 4) are different. To save space, we put these two experimental results together. Our motivation for the left column is to showcase the ability of general LDRs (defined in eq 2) to approximate various matrices (i.e., random, low-rank, and low-intrinsic-rank) and to motivate our choice of LDRs with r=1. From the bottom two figures on the left (i.e., approximating low-rank and low-intrinsic-rank matrices with LDRs) we learn that LDRs (with r=1 and r=2) are great at reducing the approximation error (they are among the top 3 in lowest approximation error).
This further provides motivation and empirical justification that LDRs with r=1 may be well suited to approximate matrices with some structure. The motivation for the right column is to compare and contrast the two variants of LDRs that we proposed, i.e., circulant and Toeplitz. Hence we should not be comparing the right with the left column. Please see the left and the right as their own sets of experiments. In each figure, all techniques approximate the same matrix. The first set of results shown is after a single iteration, hence the difference within the same figure. We can provide the left-column experiments with a higher number of iterations, but the objective is to discriminate among all the LDRs. > What is the pretraining data and task for the base ViT? We used the vit-base model and the clip-base model from HuggingFace (https://huggingface.co/google/vit-base-patch16-224) > How is the data split into train, validation and test splits, and on which split are the results reported? For vit-image experiments (table 1) we used the standard dataset split in tensorflow tfds datasets (https://www.tensorflow.org/datasets). VTAB-1K provides its own train/dev/test split and code for the same, which we used (github). For image segmentation, we used the same split as the Segment-Anything-Model (SAM) [33]. For Glue we used the standard train/dev split (https://gluebenchmark.com/). All numbers are reported on the test set (no hyperparameters were tuned on it). Thanks for pointing this out; we will update it in the manuscript. > How and on which split are the hyperparameters tuned? Section E, lines 699-700 mention that the experiments use a learning rate of 5e-5, but SVHN uses 5e-4. How is this chosen? For image classification experiments we used the same setup for all datasets except for SVHN, where the training accuracy didn’t move much when using 5e-5 as the learning rate. All hyperparameters have been tuned on the training/dev set while reported results are on the test set.
> What is the difference between SURM (Circular) and SURM (Circular-LoRA/Adapter) (similarly Toeplitz and Kronecker) used in GLUE? In the interest of saving space, our detailed discussion regarding how SURMs can be used as adapters has been moved to Appendix C. The difference between SURM(*-LoRA) and SURM(*-Adapter) is the setting in which low displacement rank matrices are used. For example, Circular-LoRA refers to using circulant matrices for a LoRA-style update, i.e., $\hat{W} = W + \alpha \Delta W$ where $\Delta W$ is circulant (please see 5.1 for the LoRA-style PEFT method). Circular-Adapter implies using the adapter method for updating the model (please see equation 7 in section C in the Appendix). Our objective was to elaborate that both these styles of training can benefit from SURMs. [33] Customized Segment Anything Model for Medical Image Segmentation, Zhang et al. 2023. [80] A large-scale study of representation learning with the visual task adaptation benchmark, Zhai et al. 2020. --- Rebuttal Comment 1.1: Comment: I appreciate the detailed response from the authors. My questions and concerns about the hyperparameter tuning strategy have been addressed. However, I am still not convinced by the authors' reasoning about large-scale experiments. Even in settings where data is scarce, when applied to larger models at the scale of billions of parameters and to more complex tasks like language generation, methods are not guaranteed to work as well. Without these experiments, it's hard to judge the effectiveness of the method. Hence, I will maintain my score. --- Rebuttal 2: Comment: > Large data Regime We investigate the performance of SURM in a large data regime using the iNat2021 [1] dataset. iNat2021 has over **2.7 million** training images, 100K validation images, and 500K test images, representing a wide array of **10,000 species (classes)**.
Full fine-tuning: **69.98%** vs SURM (circulant): **69.01%** We observe that SURM achieves similar results to full fine-tuning using only **55K** parameters as opposed to **86M** in full fine-tuning. We will add these details to the manuscript. [1] Benchmarking Representation Learning for Natural World Image Collections, Horn et al. 2021. > Large Model regime/Small data Regime Thank you for the excellent comment. We present an example below where *methods like LoRA and MoRA struggle to learn a complex task in a low data regime with a large-scale model.* Following the suggestion of reviewer o41z, we created 10k pairs of UUIDs and tested the memorization capability of LLMs. We use the large-scale **Llama-2-7B** model [2]. The goal of this experiment is to show that models struggle to learn out-of-distribution data when using low-rank updates. Since the pdf cannot be updated, the results are presented below: | Method &#8595; / Steps &#8594; | 25 | 100 | 200 | 300 | 400 | 500 | 600 | 700 | 800 | 900 | 1000 | 3000 | |----------------------------------|------|------|------|------|------|------|-------|-------|-------|-------|-------|------| | LoRA (2.9% Param) | 2.42 | 2.29 | 2.30 | 2.29 | 2.28 | 2.28 | 2.28 | 2.28 | 2.29 | 2.27 | 2.29 | 2.29 | | MoRA (2.9% Param) | 2.43 | 2.29 | 2.30 | 2.28 | 2.28 | 2.28 | 2.29 | 2.28 | 2.29 | 2.27 | 2.29 | 2.26 | | Circulant (0.01% Param) | 3.40 | 3.32 | 2.96 | 2.70 | 2.35 | 2.04 | 1.92 | 1.83 | 1.78 | 1.74 | 1.72 | 0.97 | | Circulant+Skew-circulant (0.04%) | 3.78 | 0.34 | 0.09 | 0.05 | 0.02 | 0.02 | 0.006 | 0.009 | 0.004 | 0.003 | 0.001 | 0.0 | We observe that LoRA and MoRA struggle to fit the data (cross-entropy loss around 2.3) whereas our circulant variant achieves a loss of **0.97**. In this experiment, we used a high rank=256 for both LoRA and MoRA, and modified the $Q, K, V$ parameters for all methods.
Furthermore, we show the effect of increasing the number of training parameters by using sums of products of circulant and skew-circulant matrices. A matrix $S = (s\_{jk})\_{j,k=0}^{n-1}$ is skew-circulant if $s\_{jk} = s\_{j-k}$ and $s\_{-l} = -s\_{n-l}$ for $1 \leq l \leq n - 1$. The motivation for using this particular sum of products comes from the approximation quality of such matrices (see Theorem 1 in [3]). This is evident in practice as the circulant+skew-circulant variant obtains a loss of **0** and converges much faster. This result and our toy experiment (Figure 5) consistently show that low-rank updates may struggle to fit various data regimes and that unrestricted-rank matrices may be required to alleviate this issue. SURMs solve this problem using structured matrices (keeping the parameter budget low) while allowing for arbitrary ranks. We will add this result to the manuscript. [2] MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning, Jiang et al. 2024 [3] Structured Transforms for Small-Footprint Deep Learning, Sindhwani et al. 2015 --- Rebuttal Comment 2.1: Comment: I appreciate the authors' efforts on experiments with Llama 2 7B. However, I'm suspicious about the experimental settings used for the latest experiments on UUID memorization. From the original MoRA paper [1], LoRA and MoRA can indeed memorize UUIDs well, as evidenced by the 100% character-level accuracy (Table 2) in [1]. However, the authors' experiments do not reflect this. Furthermore, why have the authors reported cross-entropy instead of accuracy as done in the original paper? This discrepancy looks suspicious and warrants more careful experimentation. I would encourage the authors to investigate this in detail. Given these results, I'm afraid I cannot change my score. [1] MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning, Jiang et al. 2024 --- Reply to Comment 2.1.1: Comment: Dear Reviewer Kv7A, Thank you for the excellent question.
The difference between our results and that reported in MoRA [1] arises because the authors in [1] use LoRA in **all** linear layers whereas we only use it in $Q, K, V$. We aim to show that in this low-parameter regime, LoRA and MoRA struggle to fit the data whereas SURM excels in this task. The cross-entropy training loss is a valid metric for this task and was reported in Fig. 2 in the original paper [1]. We found that the generation quality depends quite a lot on the hyperparameters (which are not open-sourced yet) and couldn’t exactly replicate the results presented in [1]. Therefore, for a fair comparison, we report the convergence of the training loss for all methods. [1] MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning Jiang et al. 2024
Summary: The paper introduces a new technique called SURMs which aims to use structured matrices for PEFT. The technique is tested against many different adapter variants in vision adaptation and natural language. The unique structure of the matrices allows for efficient computation of the matrix products. Strengths: - The idea of structured matrices, compared to low-rank ones, as an avenue of exploration is interesting. - Many different adapters are used during evaluation - The figures are clear and well thought out Weaknesses: The bar for a new variant of LoRA to be a substantial contribution is, to me, fairly high, just due to how crowded the space is. Structured matrices for low-rank adaptation have appeared previously: as noted in the paper, Kronecker products have been used in prior literature, while circulant and Toeplitz structures, to my knowledge, have not. Nevertheless, structured matrix adaptation has been explored in several prior papers. The change in structure of the matrix or rank alone is, to me, not sufficient unless there is substantial evidence it provides benefit, both empirically and intuitively/theoretically. The intuition and/or theory as to why we might expect this class of matrices to be useful, in terms of improved adaptation, is lacking (beyond the FFT-based efficiency). As stated previously, the space is fairly crowded for this line of research, and not every new modification can be considered significant without substantial evidence and careful reasoning behind the exploration. The empirical evidence for this change also needs to be improved upon. PEFT experiments need to largely be conducted in regimes where full finetuning represents a soft upper bound on the achievable performance. The most common use case for PEFT techniques is improving finetuning compute/memory efficiency, while retaining as much performance as possible.
In contrast, for regimes where we don't have sufficient data for full fine-tuning to perform best, it seems that we mainly end up measuring the ability of the technique to reduce overfitting, rather than the technique's ability to retain the learning capacity of the original model at lower compute levels. This is still useful, of course, but there are many other techniques to do this outside of PEFT, and it misses evaluation for the primary use case of PEFT. As a reference, I think the [MoRA](https://arxiv.org/pdf/2405.12130) paper does a decent job at this. The original LoRA paper doesn't do a great job with this and oversells, as the community knows from countless experiments with it. To effectively prove out which techniques hold up best, it's very important to conduct detailed evaluations at scale with models which can achieve SoTA on a particular dataset. The following would make the paper much stronger: - Stronger intuitive/theoretical justification for why we might expect this class of adaptations to perform better (with some small-scale experiments or ablations proving the intuition) - Experiments with models in NLP and Vision, where full finetuning generally performs best (the UUID memorization from MoRA is a good example). ImageNet seems like a good candidate assuming models have not been pretrained on that data. Larger NLP datasets seem like another good candidate (extra AR pretraining, adaptation to larger downstream tasks). From my perspective, if I actually wanted to train on many of the small vision datasets, for example, there are many different alternate fine-tuning techniques or different models I can use to improve performance in data-starved cases, especially since those cited perform worse than other fine-tuning techniques on these datasets (Cifar-100, DTD, SUN for example, even using the same model arches). The paper needs to do a more careful job of picking SoTA finetunings on larger datasets and using PEFT on those.
**Only PEFT can really target the case where I have a large amount of fine-tuning data, but not necessarily the compute/memory to target it well with a very large model, so this regime should be emphasized in the experiments.** Technical Quality: 3 Clarity: 3 Questions for Authors: What are the ranks of the other adapters used? Why do you only test against rank-1 LoRA in Table 3? How does the technique's performance change with rank? Why might we expect these structures to perform better than other structured matrices? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The paper should address limitations more directly Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to sincerely thank the reviewer for their valuable feedback and comments. > Structured matrices for low-rank adaptation have appeared previously: as noted in the paper, Kronecker products have been used in prior literature, while circulant and Toeplitz structures, to my knowledge, have not. Nevertheless, structured matrix adaptation has been explored in several prior papers. To the best of our knowledge, the entire class of low displacement rank matrices (LDRMs), a subclass of Structured Unrestricted-Rank Matrices (SURMs), has not been explored in the context of PEFT. This includes instantiations of LDRMs such as Circulant and Toeplitz matrices. As described in line 131 in section 3, Kronecker is not necessarily an LDRM but admits efficient matrix-vector multiplication and does fall under the general umbrella of SURMs. The key novelty is our proposal of LDR-SURMs as general approximators that are not restricted to low rank. This is further supported by our experimental results. > Motivation / Intuition / Justification for our proposed LDRMs After intuitively noting the inability of low-rank approximation to capture higher-order updates, we have explicitly (empirically) shown in Introduction Figure 1 (right) that Toeplitz and Circulant matrices do a better job of approximating a PSD matrix as compared to low-rank matrices. *It is well known that finetuning updates are full rank*. Our motivation is further supported by section 4.1, which clearly shows the higher approximation ability of circulant and Toeplitz matrices. Moreover, in section 4.2 we have specifically focused on the approximation quality of low-rank matrices vs. SURMs and clearly show the advantages of LDRMs. We believe that we have provided sufficient intuition and motivation (empirically) for the use of LDRMs in particular and SURMs in general.
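Editorial note: the efficiency side of this argument (mentioned by the reviewer as "FFT-based efficiency") rests on a standard identity — a circulant matrix is diagonalized by the DFT, so its matrix-vector product is a circular convolution computable in $O(n \log n)$. A minimal numpy sketch of that identity (illustrative only, not the paper's code):

```python
import numpy as np

def circulant_matvec(c, x):
    # The circulant matrix with first column c is diagonalized by the DFT,
    # so C @ x is a circular convolution: ifft(fft(c) * fft(x)).
    # Cost: O(n log n) instead of the O(n^2) dense product.
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real

n = 6
rng = np.random.default_rng(1)
c, x = rng.standard_normal(n), rng.standard_normal(n)

# Dense reference circulant: C[i, j] = c[(i - j) mod n]
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
fast = circulant_matvec(c, x)  # matches C @ x up to floating-point error
```

The same trick extends to Toeplitz matrices by embedding them into a circulant matrix of twice the size.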
> Experimental evidence in support of LDRMs We would like to emphasize that we've presented extensive experimental evidence showcasing the efficacy of SURMs. First, we evaluated SURMs on 6 datasets and compared our methods against **12** strong baselines. There are several analyses in the small data regime. Second, we have included results for CLIP models and a comparison on the VTAB-1K [80] benchmark, which is designed to evaluate models on diverse tasks using few examples. Third, we have further applied our work to image segmentation and shown that we are comparable with specialized architectures developed for SOTA on image segmentation. Overall, these sets of experiments demonstrate the utility of LDRMs, especially in the low data regime. > SoTA baselines for the low-resource setting/smaller vision datasets We would like to point to the VTAB-1K benchmark [80], which is designed to evaluate models in the low-resource regime. We report the baseline numbers from [29] and [49], which are the current state-of-the-art. Similarly, for image segmentation tasks (table 3), SAMed [33] is a dedicated SoTA model trained for image segmentation. These results show that SURM-based finetuning obtains SOTA performance on several benchmarks. > What are the ranks of the other adapters used? We report the adapter baseline performances from their respective papers and their rank is 48. > Why do you only test against rank-1 LoRA in Table 3? SAMed [33] is an adaptation of LoRA with a higher rank (rank 4; the details are mentioned in Appendix E). > How does the technique's performance change with rank? Our Circulant/Toeplitz matrices are not parameterized by rank. We have explored two ways in which the number of parameters can be increased for circulant matrices: a) $ C = \sum_i a_i C_i $ where each $C_i$ is circulant, and b) $ M = \prod_i M_i $ where the $M_i$ are Toeplitz (line 140), with a modest boost in performance but at the cost of speed.
We will add these details in the supplementary. Interestingly, we found the circulant and Toeplitz updates to be full rank and the Kronecker updates to attain the maximum possible rank (see Appendix A). > Why might we expect these structures to perform better than other structured matrices? We have motivated the use of structured matrices other than low-rank ones by showcasing how certain structures like PSD matrices are better approximated by LDRs. Moreover, in section 4 we have empirically shown that in multiple scenarios LDRs perform better than low-rank matrices. > results on larger datasets ImageNet results are presented in Table 7 (Appendix H). In that case, our method compares favorably to full finetuning while using only a negligible fraction of training parameters. [29] Fact: Factor-tuning for lightweight adaptation on vision transformer. Jie et al. 2023 [33] Customized Segment Anything Model for Medical Image Segmentation, Zhang et al. 2023. [49] Towards efficient visual adaption via structural re-parameterization. Luo et al. 2023 [80] A large-scale study of representation learning with the visual task adaptation benchmark, Zhai et al. 2020. --- Rebuttal Comment 1.1: Title: Larger Scale Comment: Given the new information, I raised my score to a 4. I feel that there needs to be more testing in higher data regimes for the paper to be accepted. Specifically for the following reasons: - In cases where full fine-tuning performs best, the performance of SURMs doesn't seem to be as strong, such as on CIFAR-100 and CIFAR-10 - There are much stronger techniques than the reported accuracy for nearly all the datasets tested, since in the low data regime we can either use zero-shot embeddings, ICL, or fine-tune smaller networks rather than relying on adapter layers. - A high-rank adapter makes most sense when the training process has to encode a lot of data, which outstrips the learning capacity of low-rank approximations.
So not only should there be more comparison against higher-rank adapters such as MoRA, but there also needs to be more testing in regimes where we expect higher ranks to be most useful; UUID memorization, I feel, is an easy one to test at relatively low compute. --- Rebuttal 2: Comment: > Large Model regime/ UUID Experiments Thank you for suggesting the UUID experiment and revising your score. Following [1], we created 10k pairs of UUIDs and tested the memorization capability of the **Llama-2-7B** model. The results are presented below: | Method &#8595; / Steps &#8594; | 25 | 100 | 200 | 300 | 400 | 500 | 600 | 700 | 800 | 900 | 1000 | 3000 | |----------------------------------|------|------|------|------|------|------|-------|-------|-------|-------|-------|------| | LoRA (2.9% Param) | 2.42 | 2.29 | 2.30 | 2.29 | 2.28 | 2.28 | 2.28 | 2.28 | 2.29 | 2.27 | 2.29 | 2.29 | | MoRA (2.9% Param) | 2.43 | 2.29 | 2.30 | 2.28 | 2.28 | 2.28 | 2.29 | 2.28 | 2.29 | 2.27 | 2.29 | 2.26 | | Circulant (0.01% Param) | 3.40 | 3.32 | 2.96 | 2.70 | 2.35 | 2.04 | 1.92 | 1.83 | 1.78 | 1.74 | 1.72 | 0.97 | | Circulant+Skew-circulant (0.04%) | 3.78 | 0.34 | 0.09 | 0.05 | 0.02 | 0.02 | 0.006 | 0.009 | 0.004 | 0.003 | 0.001 | 0.0 | We observe that LoRA and MoRA struggle to fit the data (cross-entropy loss around 2.3) whereas our circulant variant achieves a loss of **0.97**. In this experiment, we used a high rank=256 for both LoRA and MoRA, and modified the $Q, K, V$ parameters for all methods. Furthermore, we show the effect of increasing the number of training parameters by using sums of products of circulant and skew-circulant matrices. A matrix $S = (s\_{jk})\_{j,k=0}^{n-1}$ is skew-circulant if $s\_{jk} = s\_{j-k}$ and $s\_{-l} = -s\_{n-l}$ for $1 \leq l \leq n - 1$. The motivation for using this particular sum of products comes from the approximation quality of such matrices (see Theorem 1 in [2]).
This is evident in practice as the circulant+skew-circulant variant obtains a loss of **0** and converges much faster. This result and our toy experiment (Figure 5) consistently show that low-rank updates may struggle to fit various data regimes and that unrestricted-rank matrices may be required to alleviate this issue. SURMs solve this problem using structured matrices (keeping the parameter budget low) while allowing for arbitrary ranks. We will add this result to the manuscript. [1] MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning, Jiang et al. 2024 [2] Structured Transforms for Small-Footprint Deep Learning, Sindhwani et al. 2015 > Large data Regime We investigate the performance of SURM in a large data regime using the iNat2021 [3] dataset. iNat2021 has over **2.7 million** training images, 100K validation images, and 500K test images, representing a wide array of **10,000 species (classes)**. Full fine-tuning: **69.98%** vs SURM (circulant): **69.01%** We observe that SURM achieves similar results to full fine-tuning using only **55K** parameters. We will add these details to the manuscript. [3] Benchmarking Representation Learning for Natural World Image Collections, Horn et al. 2021. Please let us know if you have any questions. --- Rebuttal 3: Title: Recommend Acceptance Comment: Thank you to the authors for the follow-up experiments and the detailed conversations. I believe that, with the new experiments added to the paper and the following, the paper should be accepted. 1. Please add a sentence with more information on how to change the number of parameters in this formulation (skew-circulant, etc.) 2. Please discuss, as a hypothesis which lends itself to future work, why it is that this method seems to perform well in both low-data and high-data regimes. This is a bit counterintuitive to me. 3. Please address the limitations of this technique a bit more in-depth, and discuss when we expect this technique to perform worse than others.
With the changes already presented, and the integration of the above points, I have raised my score to a 6 due to the strong cooperation of the authors in providing additional information, experiments, etc., and the additional strong results, and I recommend acceptance. --- Rebuttal Comment 3.1: Title: Thank you for your feedback Comment: We sincerely thank the reviewer for their positive feedback and for revising their score.

> How to change the number of parameters in this formulation (Skew-Circulant etc...)

The skew-circulant matrix, like the circulant matrix, is parameterized by its first row. We consider a general update matrix given by $\Delta W := \sum\_{i=0}^{k-1} A\_i B\_i$, where $A\_i$ is a circulant matrix and $B\_i$ is a skew-circulant matrix. Since both $A\_i$ and $B\_i$ are parameterized by $n$ parameters, $A\_iB\_i$ has $2n$ parameters, and thus $\Delta W$ has $2nk$ parameters, the same as a rank-$k$ LoRA update. We will add these details to the manuscript.

> Hypothesis on the effectiveness of our method in various data regimes.

Our main hypothesis is that LDRs provide a better approximation of general matrices because they are not limited to being low rank. We evaluated this hypothesis and found it to be well supported by several experiments (see Figure 1, left, and Section 4). In low-data regimes, where we have to learn a data distribution from a small sample size, the flexibility of LDRs helps improve our approximation. Similarly, in large-data regimes, where the data distribution is more complex, the lack of low-rank restrictions allows us to achieve better results.

> Limitations of our technique.

We hypothesize that in cases where the update matrix can be well approximated by a low-rank matrix, LoRA-style methods might converge faster. We will add this to our current limitations section (Appendix M).
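[Editor's illustration, not part of the rebuttal.] The parameter-count argument above — $\Delta W = \sum_{i=0}^{k-1} A_i B_i$ with circulant $A_i$ and skew-circulant $B_i$, each parameterized by $n$ scalars — can be sketched in a few lines of NumPy. All names here are our own; this is a minimal sketch of the construction, not the authors' implementation. It shows that the update uses $2nk$ parameters (matching a rank-$k$ LoRA budget) while the resulting matrix is generically full rank:

```python
import numpy as np

def circulant(c):
    # C[j, k] = c[(j - k) mod n]: each row is a cyclic shift of the first column c.
    n = len(c)
    j, k = np.indices((n, n))
    return c[(j - k) % n]

def skew_circulant(s):
    # S[j, k] = s[j - k] with the sign convention s[-l] = -s[n - l],
    # i.e. entries wrapping around the diagonal pick up a minus sign.
    n = len(s)
    j, k = np.indices((n, n))
    sign = np.where(j - k < 0, -1.0, 1.0)
    return sign * s[(j - k) % n]

def surm_update(params):
    # params: list of k pairs (a_i, b_i), each of length n -> 2*n*k scalars total,
    # the same budget as a rank-k LoRA update for an n x n weight matrix.
    return sum(circulant(a) @ skew_circulant(b) for a, b in params)

rng = np.random.default_rng(0)
n, k = 8, 1
params = [(rng.standard_normal(n), rng.standard_normal(n)) for _ in range(k)]
dW = surm_update(params)
# Unlike a rank-1 LoRA update with the same 2*n parameters, this product is
# generically full rank (both factors are generically invertible).
print(dW.shape, np.linalg.matrix_rank(dW))
```

With $k=1$ the update uses only $2n$ scalars, yet its rank is $n$ rather than 1, which is the "unrestricted rank at a low parameter budget" property the rebuttal emphasizes.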
Summary: The paper proposes a general framework for parameter-efficient fine-tuning based on structured unrestricted-rank matrices (SURMs) to substitute for LoRA and other parameter-efficient fine-tuning methods. Three variants of SURMs are included, based on the matrix type: Kronecker, Toeplitz, and Circulant. The proposed SURM method generally achieves performance comparable with LoRA/adapters using 50% or fewer parameters on vision datasets such as CIFAR10, SUN397, and DTD. SURM can also be incorporated into adapters. Strengths: 1. The authors introduce a novel approach to substitute for LoRA. The proposed SURM method provides a novel view of how the LoRA matrix can be initialized and interacted with. 2. Integration experiments with adapters show that this method may also be applied in other modules of models, provided they have low intrinsic ranks. Weaknesses: 1. The tasks include the GLUE benchmarks, but the improvements are not convincing. On most tasks, the results trade off against those of other methods. Why not increase the trainable parameters up to the same level as other methods, for example, 0.9, to examine the upper-bound performance? This could be part of the ablation studies, showing the scaling relationship between performance and trainable parameters. 2. Experiments conducted in NLP should include newer architectures, such as LLaMA. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to sincerely thank the Reviewer for their very valuable feedback and comments.

> NLP experiments:

We'd like to emphasize the breadth and depth of experimental evidence we have provided for SURMs. First, we evaluated SURMs on 6 image classification datasets against **12** strong baselines. Furthermore, we have evaluated on the VTAB-1K [80] benchmark. VTAB-1K is designed for low-resource settings to evaluate models on “diverse, unseen tasks with few examples” (see the abstract of [80]). Across multiple low-resource settings, we find that the circulant variant works better than all previously proposed methods. This is a key contribution of our work. We have further applied our method to image segmentation and shown that SURM is comparable with specialized architectures developed for SoTA image segmentation. Thus, our experiments demonstrate the effectiveness of LDRMs, especially in the low-data regime. The results on GLUE showcase that SURMs can be extended to other modalities (text) and different modeling regimes (adapters). We will make this clearer in the manuscript. [80] A large-scale study of representation learning with the visual task adaptation benchmark, Zhai et al. 2020. --- Rebuttal 2: Comment:

> Large data Regime

We investigate the performance of SURM in a large-data regime using the iNat2021 [1] dataset. iNat2021 has over **2.7 million** training images, 100K validation images, and 500K test images, representing a wide array of **10,000 species (classes)**. Full fine-tuning: **69.98%** vs SURM (circulant): **69.01%**. We observe that SURM achieves results similar to full fine-tuning using only **55K** parameters, as opposed to **86M** in full fine-tuning. We will add these details to the manuscript. [1] Benchmarking Representation Learning for Natural World Image Collections, Horn et al. 2021.
> Large Model regime

Following the suggestion of reviewer o41z, we created 10k pairs of UUIDs and tested the memorization capability of LLMs. We use the large-scale **Llama-2-7B** model [2]. The goal of this experiment is to show that models struggle to learn out-of-distribution data when using low-rank updates. Since the PDF cannot be updated, the results are presented below:

| Method &#8595; / Steps &#8594; | 25 | 100 | 200 | 300 | 400 | 500 | 600 | 700 | 800 | 900 | 1000 | 3000 |
|----------------------------------|------|------|------|------|------|------|-------|-------|-------|-------|-------|------|
| LoRA (2.9% Param) | 2.42 | 2.29 | 2.30 | 2.29 | 2.28 | 2.28 | 2.28 | 2.28 | 2.29 | 2.27 | 2.29 | 2.29 |
| MoRA (2.9% Param) | 2.43 | 2.29 | 2.30 | 2.28 | 2.28 | 2.28 | 2.29 | 2.28 | 2.29 | 2.27 | 2.29 | 2.26 |
| Circulant (0.01% Param) | 3.40 | 3.32 | 2.96 | 2.70 | 2.35 | 2.04 | 1.92 | 1.83 | 1.78 | 1.74 | 1.72 | 0.97 |
| Circulant+Skew-circulant (0.04%) | 3.78 | 0.34 | 0.09 | 0.05 | 0.02 | 0.02 | 0.006 | 0.009 | 0.004 | 0.003 | 0.001 | 0.0 |

We observe that LoRA and MoRA struggle to fit the data (cross-entropy loss around 2.3), whereas our circulant variant achieves a loss of **0.97**. In this experiment, we used a high rank of 256 for both LoRA and MoRA, and modified the $Q, K, V$ parameters for all methods. Furthermore, we show the effect of increasing the number of training parameters by using sums of products of circulant and skew-circulant matrices. A matrix $S = (s\_{jk})\_{j,k=0}^{n-1}$ is said to be skew-circulant if $s\_{jk} = s\_{j-k}$, where $s\_{-l} = -s\_{n-l}$ for $1 \leq l \leq n-1$. The motivation for using this particular sum of products comes from the approximation quality of such matrices (see Theorem 1 in [3]). This is evident in practice, as the circulant+skew-circulant variant obtains a loss of **0.0** and converges much faster.
This result and our toy experiment (Figure 5) consistently show that low-rank updates may struggle to fit various data regimes and that unrestricted-rank matrices may be required to alleviate this issue. SURMs solve this problem using structured matrices (keeping the parameter budget low) while allowing for arbitrary ranks. We will add this result to the manuscript. [2] MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning, Jiang et al. 2024 [3] Structured Transforms for Small-Footprint Deep Learning, Sindhwani et al. 2015 --- Rebuttal 3: Comment: Dear Reviewer GJ7U, We would like to once more sincerely thank you for all the comments and very useful feedback. We think that we have addressed all of the Reviewer's questions in depth. Please let us know if the Reviewer has any additional questions; we would be more than happy to answer them. Yours sincerely, The Authors --- Rebuttal Comment 3.1: Title: Thanks for Response Comment: Thanks for the authors' response. My main concerns have been addressed, so I will keep my score.
Summary: This paper explores structured unrestricted-rank matrices (SURMs) for parameter-efficient fine-tuning (PEFT) of large-scale Transformer models. This method (SURM) is the first to apply low-displacement-rank matrices (LDRMs), which support fast matrix-vector multiplication and offer flexibility in balancing compactness and expressiveness. The matrices used are circulant and Toeplitz (LDRMs), as well as Kronecker matrices (not LDRMs). The authors demonstrate improved accuracy on various datasets when replacing the low-rank matrices in LoRA, and show a reduction in the number of parameters in adapters. Strengths: 1. Using circulant and Toeplitz matrices for PEFT seems to be a novel technique, and it is also very effective on many standard benchmarks. 2. Extensive experiments have been provided, which support the authors' claims. 3. The writing is clear, and the comprehensive supplementary material will be very helpful for readers. Weaknesses: 1. Are there any reasons why the proposed version w/ the Kronecker product outperforms previous works, such as Kadaptation? 2. Why is PSD approximation accuracy a good proxy task? Is the weight delta during fine-tuning close to PSD by any chance? 3. As the authors mention in the limitation section, there are many fancy accelerated implementations of LoRA. How slow is the proposed method compared to those LoRA implementations? Technical Quality: 3 Clarity: 3 Questions for Authors: see the weakness section. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I do not see any serious societal impact of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the Reviewer for their valuable feedback and comments.

> Any reasons why the proposed version w/ the Kronecker product outperforms the previous works, such as Kadaptation?

Kadaptation approximates the gradient update using $\Delta W = \sum_{i=1}^n A_i \bigotimes B_i$, where $B_i$ is the outer product of two vectors (and is therefore low rank) and $\bigotimes$ is the Kronecker product. In our case, we consider $n=1$ and do not require $A_i, B_i$ to be low rank. The uplift in accuracy also showcases the need to move away from low-rank constraints on $\Delta W$. These differences are elaborated further in Appendix F.

> Why is PSD approximation accuracy a good proxy task? Is the weight delta during finetuning close to PSD by any chance?

The update weights $\Delta W$ are close to full rank, and thus we used PSD matrices as a proxy. Across multiple applications, we find that $\Delta W$ after fine-tuning is full rank. We will add details regarding the rank of the update matrix for general fine-tuning.

> As the author mentioned in the limitation section, there are many fancy accelerated implementations of LoRA. How slow is the proposed method compared to those LoRA implementations?

Theoretically (please see lines 122-130), we are faster, with $O(n \log n)$ complexity, whereas LoRA is $O(nd)$. Practically, the circulant variant of SURM is on par with LoRA, whereas the Toeplitz variant is slower. --- Rebuttal Comment 1.1: Title: full rank vs PSD? Comment: Thanks for your response, I appreciate it. You mentioned $\Delta W$ is almost full rank and you used PSD as a proxy for it. Why is a PSD matrix a good proxy for full-rank matrices? Am I missing something? --- Reply to Comment 1.1.1: Comment: Thank you for the question. Our goal is to show the approximation capabilities of SURMs over a wide class of matrices. We compared SURMs' capabilities to full-rank random matrices in Fig 4 (top left).
The next class we explored is positive definite matrices, which have full rank and some structure. It's worth noting that trained neural network weights also exhibit certain structural patterns [4, 5]. In general, (symmetric) positive definite (SPD) matrices are important in various ML applications, such as convex optimization and kernel learning. SPD-NNs are widely used in Riemannian optimization [1], manifold learning [2], and CNNs [3], among others. Given the extensive literature on the use of SPD matrices in machine learning, they provide an ideal test case to evaluate the approximation quality of SURM. [1] Riemannian Multinomial Logistics Regression for SPD Neural Networks, Chen et al. CVPR 2024 [2] A Neural Network Based on SPD Manifold Learning for Skeleton-Based Hand Gesture Recognition, Nguyen et al. 2019 [3] U-SPDNet: An SPD manifold learning-based neural network for visual classification, Wang et al. 2023 [4] The training process of many deep networks explores the same low-dimensional manifold, Mao et al. 2024 [5] Principles of Riemannian Geometry in Neural Networks, Hauser et al. 2017 --- Rebuttal 2: Comment:

> Large data Regime

We investigate the performance of SURM in a large-data regime using the iNat2021 [1] dataset. iNat2021 has over **2.7 million** training images, 100K validation images, and 500K test images, representing a wide array of **10,000 species (classes)**. Full fine-tuning: **69.98%** vs SURM (circulant): **69.01%**. We observe that SURM achieves results similar to full fine-tuning using only **55K** parameters, as opposed to **86M** in full fine-tuning. We will add these details to the manuscript. [1] Benchmarking Representation Learning for Natural World Image Collections, Horn et al. 2021.

> Large Model regime

Following the suggestion of reviewer o41z, we created 10k pairs of UUIDs and tested the memorization capability of LLMs. We use the large-scale **Llama-2-7B** model [2].
The goal of this experiment is to show that models struggle to learn out-of-distribution data when using low-rank updates. Since the PDF cannot be updated, the results are presented below:

| Method &#8595; / Steps &#8594; | 25 | 100 | 200 | 300 | 400 | 500 | 600 | 700 | 800 | 900 | 1000 | 3000 |
|----------------------------------|------|------|------|------|------|------|-------|-------|-------|-------|-------|------|
| LoRA (2.9% Param) | 2.42 | 2.29 | 2.30 | 2.29 | 2.28 | 2.28 | 2.28 | 2.28 | 2.29 | 2.27 | 2.29 | 2.29 |
| MoRA (2.9% Param) | 2.43 | 2.29 | 2.30 | 2.28 | 2.28 | 2.28 | 2.29 | 2.28 | 2.29 | 2.27 | 2.29 | 2.26 |
| Circulant (0.01% Param) | 3.40 | 3.32 | 2.96 | 2.70 | 2.35 | 2.04 | 1.92 | 1.83 | 1.78 | 1.74 | 1.72 | 0.97 |
| Circulant+Skew-circulant (0.04%) | 3.78 | 0.34 | 0.09 | 0.05 | 0.02 | 0.02 | 0.006 | 0.009 | 0.004 | 0.003 | 0.001 | 0.0 |

We observe that LoRA and MoRA struggle to fit the data (cross-entropy loss around 2.3), whereas our circulant variant achieves a loss of **0.97**. In this experiment, we used a high rank of 256 for both LoRA and MoRA, and modified the $Q, K, V$ parameters for all methods. Furthermore, we show the effect of increasing the number of training parameters by using sums of products of circulant and skew-circulant matrices. A matrix $S = (s\_{jk})\_{j,k=0}^{n-1}$ is said to be skew-circulant if $s\_{jk} = s\_{j-k}$, where $s\_{-l} = -s\_{n-l}$ for $1 \leq l \leq n-1$. The motivation for using this particular sum of products comes from the approximation quality of such matrices (see Theorem 1 in [3]). This is evident in practice, as the circulant+skew-circulant variant obtains a loss of **0.0** and converges much faster. This result and our toy experiment (Figure 5) consistently show that low-rank updates may struggle to fit various data regimes and that unrestricted-rank matrices may be required to alleviate this issue.
SURMs solve this problem using structured matrices (keeping the parameter budget low) while allowing for arbitrary ranks. We will add this result to the manuscript. [2] MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning, Jiang et al. 2024 [3] Structured Transforms for Small-Footprint Deep Learning, Sindhwani et al. 2015 --- Rebuttal 3: Comment: Dear Reviewer aa5x, We would like to once more sincerely thank you for all the comments and very useful feedback. We think that we have addressed all of the Reviewer's questions in depth. Please let us know if the Reviewer has any additional questions; we would be more than happy to answer them. Yours sincerely, The Authors
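[Editor's illustration, not part of the rebuttal.] The $O(n \log n)$ claim for the circulant variant discussed in this thread rests on a standard fact: a circulant matrix is diagonalized by the DFT, so multiplying by it reduces to circular convolution via the FFT. The sketch below is our own minimal demonstration of that fact (not the authors' implementation), comparing the FFT route against a dense $O(n^2)$ reference:

```python
import numpy as np

def circulant_matvec(c, x):
    # C is circulant with first column c; C @ x equals the circular
    # convolution of c and x, computed in O(n log n) via the FFT:
    # C @ x = IFFT(FFT(c) * FFT(x)).
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real

rng = np.random.default_rng(1)
n = 512
c = rng.standard_normal(n)
x = rng.standard_normal(n)

# Dense O(n^2) reference: C[j, k] = c[(j - k) mod n].
j, k = np.indices((n, n))
C = c[(j - k) % n]
print(np.allclose(C @ x, circulant_matvec(c, x)))  # True
```

Note that the dense matrix `C` is materialized only for verification; in a SURM-style adapter the circulant factor would be applied through `circulant_matvec` alone, storing just the length-$n$ parameter vector.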
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Marrying Causal Representation Learning with Dynamical Systems for Science
Accept (poster)
Summary: This paper aims to connect causal representation learning with parameter identification in dynamical systems. By doing so, existing causal representation learning approaches can be used for estimating parameters in dynamical systems with identification guarantees. Conversely, it also demonstrates the applicability of causal representation learning to real-world data. Experimental evaluation on simulated and real-world climate data shows the effectiveness of applying causal representation learning approaches to parameter identification tasks in dynamical systems. Strengths: This paper focuses on an intriguing and emerging topic: parameter identification in dynamical systems. In particular, the authors attempt to leverage existing causal representation learning approaches to address the parameter identification problem. Reformulating the parameter identification problem as a causal representation learning problem is an appealing insight. Weaknesses: 1: The contribution is not significant. Though the paper aims to set up the connection between CRL and parameter estimation, most of the discussion is based on CRL. This work reads more like an application of CRL to parameter estimation in dynamical systems. The identifiability theories are mostly adopted from CRL. It is hard to understand their practical effectiveness in solving the parameter identification task in dynamical systems. Providing specific dynamical systems as examples would be helpful in understanding the usage of these theories. 2: Missing important related works. Parameter identification has been addressed with identifiability guarantees, e.g., [a], but these works are not well discussed. [a] Yao, W., Chen, G., & Zhang, K. (2022). Temporally disentangled representation learning. Advances in Neural Information Processing Systems, 35, 26492-26503. 3: Experiments are insufficient. Only simulated wind data and real data on sea surface temperature are considered.
To have a comprehensive understanding of the effectiveness in dynamical systems, different types of dynamical systems should be considered. Ablation studies should be performed on scenarios where the true functional form is known, for verification. Technical Quality: 2 Clarity: 2 Questions for Authors: 1: Are the assumptions (e.g., Assumptions 2.1 and 2.2) mild? Which properties must a dynamical system have to satisfy these assumptions? 2: How is your work different from [a], which can deal with videos and noisy data with identification guarantees? 3: Why do you use mechanistic neural networks? 4: How does the proposed approach handle heterogeneous data with multiple sets of parameters? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors have discussed the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Reply to weaknesses (same order as given by the reviewer) 1. The [global rebuttal](https://openreview.net/forum?id=MWHRxKz4mq&noteId=n59QJ5gxJF) clarifies our paper’s **novelty and contribution**. Additionally, we successfully demonstrated parameter identification in various existing systems. Please refer to `additional experiments` in the [global rebuttal](https://openreview.net/forum?id=MWHRxKz4mq&noteId=n59QJ5gxJF). 2. We thank the reviewer for providing this line of related work ([a] is listed as [`8`] under our references). Our paper looks **superficially similar** to those, but they are **conceptually different**. Please allow us a quick clarification in this regard: 1. Reference [`8`] models dynamics in the **latent space**, whereas our approach models dynamics directly in the **observational** space. The reasons for this choice are detailed in the [global rebuttal](https://openreview.net/forum?id=MWHRxKz4mq&noteId=n59QJ5gxJF). 2. Consequently, our framework addresses **a different hierarchy of problems**. For instance, in the cart-pole example from Figure 3 in [`8`], while [`8`] treats the time-varying ODE states (**cart position, pole angle**) as latent variables, our focus is on estimating higher-level, time-invariant parameters such as **gravity, cart mass, pole mass, and pole length** for the known system (see `additional experiments`). A similar distinction applies to the motion capture dataset in [`8`]. 3. Hence, we believe that both problems are **orthogonal yet equally important, and we encourage cross-pollination in future work**. We will thoroughly discuss these works in the updated manuscript and explore the potential applications of these approaches to scientific discoveries. 3. **Additional studies** are provided on systems with **known functional forms**, including the **cart-pole** system inspired by [`8`]. Please refer to `additional experiments` for details. 
We thank the reviewer for this suggestion and kindly ask them to reconsider their scores in light of the new experiments. While we cannot directly compare against [`8`] because we take as input the whole trajectory instead of modeling the dynamics step by step, we will carefully explain the differences and opportunities for synergy between the approaches. &nbsp; ### Reply to questions (same order as given by the reviewer) 1. The **validity of the assumptions** is discussed in the [global rebuttal](https://openreview.net/forum?id=MWHRxKz4mq&noteId=n59QJ5gxJF). 2. The **difference** between our paper and the **provided reference** is discussed in the second point of “Reply to weaknesses.” 3. The reason **why we use MNN** is discussed in Appendix D (`lines 727-739`). We will refer to this link more clearly in the updated manuscript, and we apologize that this was not clearly referenced in the main text of the submitted draft. 4. Thank you for this intriguing question. It is indeed **possible** to identify latent parameters from **heterogeneous** time series data using multiview CRL algorithms [`6`], which assume multimodal mixing functions for different views. This paper primarily focuses on a single climate dataset to **establish a connection** between causal representation learning and dynamical systems; thus, the consideration of heterogeneous data is **outside** **the** **scope** of this study. However, we acknowledge that extending this approach to **heterogeneous** data is a **promising** area for future research. Broadly, we believe that many algorithms developed in causal representation learning can be adapted for dynamical systems, and we hope our paper lays the **foundations** for and **encourages** this future work. --- Rebuttal Comment 1.1: Title: We kindly ask if the rebuttal has addressed the reviewer's concerns Comment: Dear reviewer `yeqY`: We greatly appreciate the reviewer's constructive feedback and thoughtful consideration. 
As the discussion period concludes soon, we kindly ask if the rebuttal has addressed their concerns. Should any further questions or concerns arise, we are happy to provide clarification during the remaining discussion time. --- Rebuttal 2: Title: Author-Reviewer Discussion Reminder Comment: Dear reviewers, As we are approaching the end of the review-author discussion, please read the authors' response and clearly acknowledge whether they have successfully addressed your concerns. Best, Your AC
Summary: The authors draw a connection between assumptions in the (neural) ODE and causal representation learning (more precisely, latent variable identifiability, but I will follow the authors and refer to it as CRL) literatures, by framing ODE inverse problems as latent variable problems. A particular focus is given to the case where the parametric form of the ODE is unknown, and only trajectories (i.e., time series data) are given. The learned generator and latent variables in CRL are analogous to a flexibly parametrized ODE solver and corresponding parameters, respectively. An example is given where pairs of trajectories sharing some parameter values are interpreted as same-content-different-style observations, which can be block-identified according to previous work in CRL. The authors describe a specific learning framework based on mechanistic neural networks which encode the trajectories into not only the parameters of a flexible family of ODEs, but also hyperparameters of the solver, defining a flexible generative process of observed trajectories. Strengths: The authors state as motivation the need for successful real-world applications of CRL. This is an incredibly important problem to tackle, and cross-pollinating with the AI/ML for Science (specifically, inverse problems) community is a good place to start. There are many parallels between the areas of parameter identification in ODEs and latent variable identification in CRL: both recognize the significance of identifiability, but while it appears that application has outpaced theory in Neural ODEs, the opposite has occurred in CRL. The application of CRL theory to ODE inverse problems is hence original and potentially of great significance to both communities. Weaknesses: Although the paper draws mathematical connections between identifiability in inverse problems and CRL, I found the conceptual connection to be insufficient. 
The parameters in ODEs and latent variables in CRL, even if they can be made mathematically equivalent, are quite different in interpretation. The former represents physical parameters which need to be identified point-wise, whereas the focus in the latter (at least, within the scope of examples given in this paper) is typically in separating latent factors of variation, whereas identifiability of the factor itself can be up to arbitrary and unknown reparametrization (as long as it contains full and only information about that specific factor). For me, the lack of successful real world application of CRL is precisely because the community is unsure of what to do with these resulting factors, which have arbitrary units and are not physically interpretable. To be fair, the usual ODE identifiability seems to be for the case where the dynamics are known, and not the "ODE discovery" setting that the paper focuses on in Sections 3.2+. Indeed, if all that is given are trajectories without an existing physical theory, it is clearly underdetermined to find physically meaningful parameters. Nonetheless, to build a truly useful bridge between the two fields, I find that the paper misses a crucial discussion on what this typical style of CRL identifiability, perhaps with unknown dynamics, can provide to the inverse problems community. For example, concerning the statement on l174: > identifying climate zone-related parameters from sea surface temperature data could improve understanding of climate change because the impact of climate change significantly differs in polar and tropical regions. The paper would greatly benefit by giving more details on why CRL-style identifiability is important in studying this problem. For example, does the ATE estimation (l377) rely on identifiability, and would non-identifiable contemporaries fail in causal prediction? I really believe in the message of the paper and the mission to improve real-world applicability of CRL. 
However, unless the benefits of CRL-style identifiability for ODE inverse problems can be spelled out more explicitly, I do not believe the submission is ready for publication. Instead, I would strongly advocate for publication of a version where the following questions are at least partially answered. Technical Quality: 3 Clarity: 3 Questions for Authors: - (Repeated from above) Does the ATE estimation (l377) rely on identifiability, and would non-identifiable contemporaries fail in causal prediction? - The experiments also show improvements in classification, generalization, and forecasting---is there an explanation, even an intuitive one, for why CRL-identifiable models in particular might be preferable for these tasks? - Can the parameters learned by the pipeline always be interpreted as physical parameters of some underlying ground truth dynamics, or useful proxies thereof? Maybe if the MNN is well-specified? ### Other more minor points - Is it straight-forward to port over CRL results that rely on restricting the function class of the generator, e.g., orthogonal/sparse Jacobian, when it is supposed to represent an ODE solver? - The interpretation of dynamical systems as latent variable generative models may be the same approach taken in Bayesian inverse problems [1], for example see section 3.3 in the reference. If there is a connection here, maybe it should be discussed (disclaimer: I'm not at all an expert in Bayesian inverse problems). [1] Stuart, Andrew M. "Inverse problems: a Bayesian perspective." Acta numerica 19 (2010). Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: As the paper does not introduce a new method but rather aims to draw connections, it is perhaps unnecessary to explicitly discuss the limitations. I think the authors do a fine job outlining potential future work in the final section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Reply to weaknesses We appreciate the reviewer’s thoughtful comments. We concur that identifying parameters from an unknown system is highly **challenging** and can be **difficult to interpret**. We also acknowledge that the **CRL-identifiability** provided in `Corollary 3.2` is **limited**, as it isolates certain parameters only up to a bijection. However, given that point estimates for parameter identification in **unknown** systems are **fundamentally impossible** (`lines 198-200`), our neural emulator, which achieves **strong forecasting** performance and provides **some level** of parameter identification, surpasses models that focus solely on forecasting without considering identifiability, as shown by comparison to the **non-identified TI-MNN** model in `Table 2` in the main text. We argue that CRL-identifiability remains valuable for various downstream analyses, even when the underlying system is unknown: 1. Identified latent parameters help **understand** **causal effects under intervention**. For example, in `Figure 6` (Appendix), the sea surface temperature drops when the inferred latitude-related variable $\\hat{\\theta}$ is permuted. This can be viewed as a **sensitivity analysis** within the dynamical system community, typically studied experimentally on a case-by-case basis. 2. Prior works [`19, 20, 21`] have demonstrated that **CRL-identified** latent variables **outperform** in related **downstream** classification tasks and exhibit **greater robustness** in **domain adaptation** **and out-of-distribution** scenarios. When these identifiable CRL algorithms are applied to **dynamical systems**, these advantageous properties are **retained**, as evidenced by `Figure 2` and `Table 2` in the main paper. 3. **Isolating covariates** information is crucial for treatment effect estimation, see the second point under `additional experiments` under [global rebuttal](https://openreview.net/forum?id=MWHRxKz4mq&noteId=n59QJ5gxJF). 
&nbsp; ### Reply to questions (same order as given by the reviewer) 1. Thank you for this question. ATE estimation **critically depends on identifiability**. Our additional experiments demonstrate that **non-identified** parameters yield **meaningless** ATE estimates, whereas **identified** parameters provide an **increasing** trend. We will update this figure in the revised manuscript to reflect these findings. Please refer to the **second point** in `additional experiments` under [global rebuttal](https://openreview.net/forum?id=MWHRxKz4mq&noteId=n59QJ5gxJF) for more details. 2. Intuitively, CRL-identified latent variables are valuable for downstream tasks because the learned representation accurately captures **all and only** the information about the ground truth parameters, **isolating confounding factors**, which is critical for treatment effect estimations (see `additional experiments`). For example, if a task is related to a parameter $\\theta$, a **non-identified** representation might include **arbitrary information unrelated** to $\\theta$. Using such representations for classification tasks could perform **no better than random guessing**. This is why we evaluate the learned representation on downstream classification tasks and robustness tests to validate its identifiability. 3. When the MNN (Mechanistic Neural Network [`22`]) is well-specified, the learned parameters can **correspond one-to-one** with physical parameters through a bijection. This bijection serves as a useful proxy in **sensitivity analysis** and provides **high-quality features** for downstream classification tasks, as previously discussed. If some physical parameters are known for certain trajectories, the **bijection** can be **learned explicitly**, allowing the mapping of the learned representation to the ground truth. &nbsp; ### Reply to other minor points (same order as given by the reviewer) 1. Thanks for raising this point. 
It is discussed in [global rebuttal](https://openreview.net/forum?id=MWHRxKz4mq&noteId=n59QJ5gxJF) under “Identifying ODEs with specific properties.” 2. We agree that interpreting dynamical systems as latent variable models parallels approaches used in Bayesian inverse problems. As discussed in `lines 158-164`, **most Bayesian methods for parameter identification**, such as Gradient Matching [`4`], assume a **known** functional form but often **lack clear identifiability statements**. In contrast, our `Corollary 3.1` provides a theoretical framework that **explains** the successful empirical identification results observed with these methods. --- Rebuttal Comment 1.1: Title: We kindly ask if the rebuttal has addressed the reviewer's concerns Comment: Dear reviewer `aFgN`: We greatly appreciate the reviewer's constructive feedback and thoughtful consideration. As the discussion period concludes soon, we kindly ask if the rebuttal has addressed their concerns. Should any further questions or concerns arise, we are happy to provide clarification during the remaining discussion time. --- Rebuttal 2: Title: Author-Reviewer Discussion Reminder Comment: Dear reviewers, As we are approaching the end of the review-author discussion, please read the authors' responses and clearly acknowledge whether they have successfully addressed your concerns. Best, Your AC
Summary: This paper proposes a theory and methodology for representation learning from dynamical systems. In particular, it proposes a model in which latent causal variables deterministically generate an observed time-series trajectory through an ODE, with the task of identifying the latent variables from observed time series. Theoretically, the authors devise identifiability conditions by drawing on the causal representation learning literature, treating the ODE as the (black box) mixing function. To identify variables in practice, it is proposed to use mechanistic neural networks (MNNs) to learn the ODE mixing function. Empirical results show that the proposed method can strike a good balance between identification of the latent variables and predictive performance. Strengths: The topic of the paper (learning interpretable causal variables from dynamical systems) is of great importance for e.g. the sciences and this paper makes a significant conceptual step in this direction. The specific proposed problem (recovery of stationary, trajectory specific parameters) and the idea of applying causal representation techniques (such as the multiview approach) to dynamical systems data is an interesting, and to my knowledge, novel problem and approach. The proposed method for partially identifying parameters is a sensible combination of existing components (loss function from CRL literature, and MNN parameterization of ODE) that enables both accurate prediction of trajectories as well as identification of causal variables (by separating the causal variables $\theta$ from the ODE parameters $\alpha$). The experiments are well-motivated and show the competitive/superior performance of the proposed method over baselines in terms of prediction and identification. The paper is generally well written and has a clear structure, though some of the notation could be confusing at times (see suggestions below). 
Weaknesses: The novelty in terms of theory is fairly weak; the theoretical identifiability results directly follow from existing literature with minimal modifications, and do not exploit anything specific to ODEs (e.g. 1. are there certain ODE assumptions which enable identifiability, e.g. certain function types or sparsity in the ODE's equation; 2. how would the situation change with time-dependent latents)? In terms of significance, the assumptions underlying the method are very strong and may limit the applicability of the method (e.g. determinism, no latent temporal variables). Technical Quality: 3 Clarity: 3 Questions for Authors: - The results show that one can identify the causal variables (up to blocks). Does this mean that the ODE function is also learned correctly, and thus we can trust e.g. the ATE in Experiment 6.2? - What is forecast error in Table 2? (MSE?) Suggestion on presentation: It can sometimes be confusing how $x, \theta, $ and the parameters in $\alpha$ relate. It might be useful to include a diagram (e.g. plate notation) to show which variables are time-dependent, trajectory-dependent etc. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations of the method are mostly well-discussed in the text and conclusion. There are two other limitations I would encourage the authors to discuss. Firstly, as with most CRL methods, significant prior knowledge is required: in particular here, identifying twin trajectories where some unknown causal/latent parameters differ. Secondly, it is assumed that the ODE evolves over the observation space (e.g. in the sea surface temperature example, over the surface temperatures), whereas the true underlying ODE is more likely to involve other/latent temporal variables (e.g. amount of sunlight, polar ice cap coverage), on top of stationary latents. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and will address the concerns individually. &nbsp; ### Reply to weaknesses 1. Please refer to our general clarification on the **novelty and contribution** in [global rebuttal](https://openreview.net/forum?id=MWHRxKz4mq&noteId=n59QJ5gxJF). 2. > are there certain ODE assumptions which enable identifiability, e.g. certain function types or sparsity in the ODE's equation? **Identification of ODEs with certain properties** is discussed in [global rebuttal](https://openreview.net/forum?id=MWHRxKz4mq&noteId=n59QJ5gxJF). Please refer to the paragraph “Identifying ODEs with specific properties” for details. 3. > how would the situation change with time-dependent latents? **Identification of time-varying latents** is discussed in `Appendix D`, and we will clarify this reference in the updated version, referencing it in the main text. This issue is closely related to the temporal CRL approaches that model dynamics in the latent space [`8, 16, 17`]. We offer a brief discussion of this in the [global rebuttal](https://openreview.net/forum?id=MWHRxKz4mq&noteId=n59QJ5gxJF) under "Why not model dynamics in the latent space" and will address the topic more thoroughly in the revised manuscript. 4. The validity and applicability of our **technical assumptions** are discussed in the [global rebuttal](https://openreview.net/forum?id=MWHRxKz4mq&noteId=n59QJ5gxJF). &nbsp; ### Reply to questions 1. The ODE function is **learned implicitly** by the mechanistic neural network but with disentangled parameters, as evidenced by the **low forecasting error and OOD performance** (see `Table 2` in the main paper). 2. Thanks for this question. Yes, the forecasting error is the **mean squared error (MSE)** summed over the state dimension and averaged over batch and time dimensions. We will add this information in the updated manuscript. 
&nbsp; ### Reply to suggestion on presentation We thank the reviewer for this valuable advice. `Figure 6` in the Appendix illustrates such an overview of the architecture and relations between different variables (referred to in `lines 244-246`). We will clarify this link further in the revised version. --- Rebuttal Comment 1.1: Title: We kindly ask if the rebuttal has addressed the reviewer's concerns Comment: Dear reviewer `3NaZ`: We greatly appreciate the reviewer's constructive feedback and thoughtful consideration. As the discussion period concludes soon, we kindly ask if the rebuttal has addressed their concerns. Should any further questions or concerns arise, we are happy to provide clarification during the remaining discussion time. --- Rebuttal 2: Title: Author-Reviewer Discussion Reminder Comment: Dear reviewer, As we are approaching the end of the review-author discussion, please read the authors' response and clearly acknowledge whether they have successfully addressed your concerns. Best, Your AC
Summary: This paper bridges causal representation learning (CRL) with dynamical system learning. It introduces partially identifiable and practical models by merging methodologies from both CRL and dynamic systems. The authors develop models capable of handling out-of-distribution classification tasks and treatment effect estimation. Notably, they validate their approach using a wind simulator and real-world climate data, effectively demonstrating the model's potential in addressing causal questions related to climate change. Strengths: 1. This paper is one of the few causal representation learning works that claim contributions to theoretically sound methods with working and impactful real-world problem solutions. 2. The authors provided a theoretically sound analysis of the partial identifiability of the model. 3. The task chosen by the authors to validate their proposed method is very interesting and potentially impactful: climate model estimation. Solving such a problem and providing solutions with theoretical guarantees will be very impactful. Weaknesses: The proposed method claims partial identifiability of the invariant part of the dynamic models. There are existing works, such as [1], which study dynamic models in hidden space and demonstrate partial identifiability for the invariant part of the dynamic models. Can the authors provide some comments on why they chose to study the dynamic model in the observational space and how their proposed approach might be better than the existing work? [1] Li, Zijian, et al. "When and How: Learning Identifiable Latent States for Nonstationary Time Series Forecasting." arXiv, 2024, arxiv.org/abs/2402.12767. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see Weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Reply to weaknesses Thank you for the positive feedback and for providing this interesting related work (listed as [`17`] under our references). Although our paper may **superficially resemble** [`17`], there are **important differences** between the two. Please allow us to make a quick clarification: 1. The **hierarchy of the problem setting** differs: At a high level, [`17`] models dynamics in the latent space by treating **time-varying** latent variables as hidden states, while we model dynamics directly in the **observational** space. A discussion on this distinction is provided in the [global rebuttal](https://openreview.net/forum?id=MWHRxKz4mq&noteId=n59QJ5gxJF) under "Why not model dynamics in the latent space." 2. The **mixing process** differs: We consider the **entire trajectory** as our entangled **observation**, generated by $(x_{t_1}, \dots, x_{t_N}) = F(\theta)$, where the **mixing** function $F$ is inherently **non-stationary** and depends on the timespan of interest. In contrast, the referred work [`17`] uses stationary diffeomorphism mixing and **implicitly** accounts for **non-stationary properties** within the latent space. 3. Our **definition of “stationary/invariant”** is different: [`17`] considers a partition of latent variables $z_t^s$ as stationary when it is **invariant to the environment** $e_t$, meaning there is no causal link from the environment $e_t$ to $z_t^s$. However, $z_t^s$ still depends on time. In contrast, in our context, a stationary parameter is **time-invariant** and trajectory-specific, such as the pole length $l$, gravity $g$, cart mass $m_c$, and pole mass $m_p$ in the cart pole experiment (see `additional experiments` under [global rebuttal](https://openreview.net/forum?id=MWHRxKz4mq&noteId=n59QJ5gxJF)). 4. Overall, we believe that [`17`] is more closely related to [`8,16`], and that both lines of work are **orthogonal yet equally important**. 
We will provide a thorough discussion of this connection in the updated manuscript. --- Rebuttal Comment 1.1: Comment: Thank you. I've read your rebuttal, responses, and the other reviews. I will keep my score since it is already positive.
Rebuttal 1: Rebuttal: We are extremely grateful to the reviewers and AC for their time and valuable feedback. We very much appreciate that they found the problem we are tackling is *“very interesting and potentially impactful*”(`bh3V`), “*of great importance for e.g. science*”(`3NaZ`), “*incredibly important*”(`aFgN`), and “*intriguing and emerging*”(`yeqY`). We are happy to see that our idea of applying CRL techniques to dynamical systems is *“interesting and novel”* (`3NaZ`), “*original and potentially of great significance to both communities*”(`aFgN`). **Novelty and contribution (`3NaZ`, `yeqY`)** * As discussed in `lines 73-78`, this paper’s main contribution is providing **clear parameter-identifiability statements** for dynamical systems, whereas numerous **previous works** on ODE discovery [`1, 2, 3, 4`] refrained from doing so by explicitly stating that it is **unknown** which settings yield identifiability (`lines 165-166`). * To the best of our knowledge, our paper is the **first** to target **real-world scientific applications** and successfully demonstrate identification results on **raw measurements**, unlike **previous CRL** work which relies on synthetic or **heavily pre-processed** data (e.g. manually rendered images[`5, 6`] or extracted avatar skeleton [`9`]). **Validity of the technical assumptions (`3NaZ`, `yeqY`)** * Assumptions 2.1 and 2.2 are **standard and necessary** for parameter identification in dynamical system [`10, 11`]. Many ODEs satisfy these assumptions, including Lotka-Volterra ODE, Van der Pol oscillator, and chaotic systems such as the Lorenz attractor and the Rössler attractor. * Further, we justify in `Table 1` that **CRL assumptions** (`3.1-3.3`) **align** with standard assumptions 2.1 and 2.2, establishing the theoretical ground for cross-pollination between these two fields. * Nevertheless, we acknowledge the limitation of the determinism assumption, as discussed in `Section 6`. 
**Identifying ODEs with specific properties (`3NaZ`, `aFgN`, `yeqY`)** * `Corollary 3.1` provides full identifiability for stationary parameters in ODEs with known functional forms, including **any parametric form** (linear, polynomial, even nonlinear). This is **supported** by various empirical studies from **prior literature** on equation discovery [`1, 2, 3, 4`] and attached **additional experimental results**. * For ODEs with partially known properties, such as a **sparse linear combination** $f_{\theta}(x, t) = \sum_{i=1}^{m} \theta_i \phi_i(x)$ **of various basis functions** $\phi_i$ within a comprehensive dictionary (e.g., SINDy-like scenarios [`1, 2, 3`]), `Corollary 3.1` ensures parameter identifiability under a sparsity constraint. Notably, the ODE relates to the parameters $\theta$ linearly, as detailed in the main paper (`lines 154-157`) and the appendix (`lines 653-663`). * **Many existing CRL works** assume certain properties on the generating process, such as **sparsity** [`14`] or **specific functional class** [`15`], which can be **directly imported** into our framework by replacing the multiview approach. **Why not model dynamics in the latent space (`bh3V,3NaZ,yeqY`)** * We model dynamics directly in the observational space because the underlying parameters $\theta$ **directly determine** the whole trajectory: $(x_{t_1}, \dots, x_{t_N}) = F(\theta)$. * **Almost all CRL works** assume the latent variables **directly influence** the observation, except for a few considering hierarchical latent models [`12, 13`], which we will thoroughly discuss in the updated manuscript. * As discussed in `Section 6`, we agree assuming direct observation of the states is limiting, and relaxing this assumption is an interesting future direction. However, since this paper is the **first to formally connect** CRL-identifiability with inverse problems in dynamical systems, we believe that exploring this perspective is beyond the scope of the current study. 
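To make the linear-in-parameters setting concrete, here is a minimal sketch (our own illustration, not the authors' code) of `Corollary 3.1`-style point identification for a hypothetical logistic ODE $\dot{x} = \theta_1 x + \theta_2 x^2$: because the dynamics are linear in $\theta$, a least-squares fit over a basis-function dictionary recovers the stationary parameters from a single trajectory, SINDy-style.

```python
import numpy as np

# Hypothetical logistic system dx/dt = theta1*x + theta2*x^2 with
# ground-truth stationary parameters; we simulate a trajectory and then
# recover theta from the data alone.
theta_true = np.array([1.5, -0.5])

# Simulate with a fine forward-Euler step (dt small enough that the
# discretization bias is negligible for this illustration).
dt, n = 1e-3, 5000
x = np.empty(n)
x[0] = 0.1
for k in range(n - 1):
    x[k + 1] = x[k] + dt * (theta_true[0] * x[k] + theta_true[1] * x[k] ** 2)

# Estimate derivatives by finite differences, build the dictionary of
# basis functions Phi = [x, x^2], and solve the linear least-squares
# problem dx/dt = Phi @ theta for theta.
dxdt = np.gradient(x, dt)
Phi = np.column_stack([x, x ** 2])
theta_hat, *_ = np.linalg.lstsq(Phi, dxdt, rcond=None)

print(theta_hat)  # close to [1.5, -0.5]
```

The point of the sketch is that once the functional form (or a dictionary covering it) is known, identification reduces to a well-posed linear regression, which is why full point identification is possible here even though it fails for fully unknown systems.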
**Additional experiments (see attached pdf)** * We provide additional experiment results on specific dynamical systems with **known functional forms**, including eight highly complex systems from **ODEBench** [`18`] and the **Cart-Pole** system inspired by [`8`]. * For each system, we sample 100 tuples of parameters $\theta$ within a valid range to preserve properties like chaos. Each problem is solved exactly as described in `Corollary 3.1`. * The root-mean-square error (RMSE) is averaged across the parameter dimensions and sample size (100), reported via mean and std. * We observe **highly accurate point estimates** for all stationary system parameters $\theta$, validating `Corollary 3.1` **across various experimental settings**. Due to space constraints, we cannot enclose results for all 63 systems from ODEBench [`18`]. Complete results and implementation details will be provided in the revised manuscript. * Comparing **ATE** estimates from non-identified and identified representations for SST-V2: * The attached `Figure 1` illustrates that the estimated ATE from the **non-identified** representation **lacks a discernible pattern**, while the **identified** one exhibits a noisy yet **clear increasing trend**, indicating the global warming effect. * This is because the non-identified representation failed to isolate the covariates $\theta$, leading to biased treatment effect estimates. To estimate treatment effects, the **covariates** (i.e., the latitude-related parameters we identify) **must not be influenced by the treatment** (i.e., the climate zones). Otherwise, they become confounders, leading to incorrect estimates [`22`]. * We apologize for not explaining this clearly in the current draft. 
In the final version, we will add the additional graphical model and the non-identified baseline (attached `Figure 1`) together with a thorough explanation about treatment effect estimation with covariates and why CRL-identifiability (up to bijection) is necessary to avoid confounding. Pdf: /pdf/bc03f9987a5b6a5994c5c24cbe4d5a6c3648eb74.pdf
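The confounding argument in the global rebuttal can be reproduced with a toy simulation (entirely our own construction, not the paper's data or pipeline): when a covariate $\theta$ drives both treatment assignment and the outcome, a naive difference in means is badly biased, while adjusting for the identified covariate recovers the true ATE.

```python
import numpy as np

# Toy ATE example: covariate theta (think: the latitude-related parameter)
# influences both who gets "treated" and the outcome, so it confounds a
# naive comparison unless we adjust for it.
rng = np.random.default_rng(0)
n, true_ate = 20000, 2.0
theta = rng.normal(size=n)                           # identified covariate
t = (theta + rng.normal(size=n) > 0).astype(float)   # treatment depends on theta
y = true_ate * t + 3.0 * theta + rng.normal(size=n)  # outcome

# Naive difference in means: biased upward, because treated units also
# tend to have large theta.
naive = y[t == 1].mean() - y[t == 0].mean()

# Regression adjustment y ~ 1 + t + theta: the coefficient on t
# recovers the true ATE once the covariate is included.
X = np.column_stack([np.ones(n), t, theta])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted = beta[1]

print(round(naive, 2), round(adjusted, 2))  # naive far from 2.0, adjusted close
```

This is only a caricature of the SST-V2 analysis, but it shows why an identified (bijection-level) estimate of the covariates is needed: a non-identified representation that mixes treatment information into $\theta$ plays the role of the unadjusted comparison.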
NeurIPS_2024_submissions_huggingface
2024
DePLM: Denoising Protein Language Models for Property Optimization
Accept (poster)
Summary: This paper proposes a new method, DePLM, for supervised fine-tuning of protein language models (PLMs) for fitness prediction tasks. DePLM uses a denoising framework with a rank correlation objective to iteratively denoise PLM likelihoods and only retain the component of the likelihood that corresponds to the fitness property of interest. The paper reports that DePLM improves on previous methods for fitness prediction on ProteinGym benchmarks and a few other single-protein benchmarks. Strengths: The paper addresses an important limitation of using PLMs for fitness prediction, namely that the overall evolutionary fitness of a protein (learned by PLMs) is a combination of many different factors, some of which are irrelevant for a particular property of interest. Denoising is an intuitively appealing approach to removing such irrelevant features. In this approach, the paper presents some interesting, novel ideas: it proposes to use a denoising framework to improve PLM likelihoods for fitness prediction, and also proposes a novel method for fine-tuning with a rank correlation objective within that denoising framework. In their notion of denoising, a sorting algorithm creates a denoising path from rankings with low correlation to the ground truth to rankings with perfect correlation to the ground truth; this is an interesting idea. The paper also runs some interesting experiments to test the ability of fitness prediction methods to generalize from one protein to another (Q2 in the paper), as long as the fitness properties between the proteins are somewhat similar. Weaknesses: **Major:** 1. **More baselines and ablation studies are needed to show whether DePLM is state of the art.** DePLM incorporates structural information, but in the experiments claiming that DePLM outperforms baseline methods for protein fitness prediction (Table 1), none of the other methods are given structural information. 
Methods that leverage structural information, such as SaProt, should be included as other relevant baselines. Similarly, in Table 2 we see that DePLM is the only method that includes evolutionary, structural, and experimental labels, but other methods that do include evolutionary and structural information could be naturally finetuned to include experimental labels. The claim that “the architecture employed by our model … enables more efficient utilization of experimental data” does not seem sufficiently supported by the shown results. 2. **The datasets used in the experiments in the paper should be described more clearly.** In particular, the experimental results for baseline methods on ProteinGym are significantly different between this paper (Table 1) and previous papers, as well as the ProteinGym leaderboard available at https://proteingym.org/. For example, ProteinNPT reports an average Spearman correlation of 0.65 on ProteinGym when performing cross-validation on random splits of the data, but in Table 1 of this paper, ProteinNPT has Spearman correlation higher than 0.65 for every subset of ProteinGym. The paper does not explain why there is this discrepancy. 3. **Describing the method under the diffusion framework is confusing and obscures what methods should be relevant comparisons to DePLM.** The theoretical basis of diffusion models relies on specifying a forward noise process that, as the time step t grows, turns complex data distributions into a known distribution that we can sample from. Learning the reverse process allows us to sample from the complex data distribution. In this paper, there is not a stochastic forward noise process, there is no noisy distribution as t -> infinity (there is a deterministic rank ordering), and there is no data distribution at t=0 that the method is attempting to sample from (again there is a deterministic rank ordering). 
This method seems accurately described as a denoising model, but it is confusing to me to use the diffusion framework. 4. **There are no error bars on experimental results.** It is well known that some ProteinGym datasets are harder than others, so performance must vary significantly based on the random cross-validation split, and it is unclear how many of the results are statistically significant. 5. Overall, I think that this paper presents an interesting novel method, but needs more thorough experiments with stronger baselines to show that there is a real gain in performance from their complex architecture and modeling pipeline, compared to simpler ways of integrating sequence, structure, and experimental information together. **Minor:** * Line 133 “addictive” -> additive * Line 299 “Limitation” -> Limitations Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Do you have comparisons between DePLM and structure-aware models like SaProt and ProtSSN on protein fitness prediction? (Either with finetuning, or at least a simple baseline where they are used as part of a One Hot Encoding supervised method?) 2. Why are the baseline results on ProteinGym different between previously published results and what you show in Table 1? 3. Can you add error bars or other statistical analysis of the results? 4. Why does adding more denoising steps beyond 3 produce worse results? Can you determine the right number of denoising steps without ground truth labels to do hyperparameter tuning? 5. Do you apply the QuickSort algorithm with stochasticity? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors discuss some limitations of their work. Here are some additional points of consideration: 1. 
The computational complexity of their method is not directly compared with other state of the art methods, but based on the sentence, “models are trained on four Nvidia V100 32GB GPUs for 100 epochs”, it seems possible that their method is computationally much more expensive than other baselines. 2. The paper often uses “protein optimization” and “protein fitness prediction” interchangeably, but these are importantly different tasks, and the paper uses metrics traditionally associated with fitness prediction, not optimization. Spearman correlation metrics measure fitness prediction accuracy across both low and high fitness sequences, while NDCG and other similar metrics emphasize finding the best/optimal sequences out of a set. It is not clear from the experiments in the paper that you will find more optimal protein sequences using their method; generally, such experiments involve an actual optimization procedure to generate candidate sequences, and validation that these candidates are better than the candidates produced by baseline methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
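The reviewer's distinction between fitness-prediction metrics (Spearman) and optimization-oriented metrics (NDCG) can be made concrete with a small numpy sketch (toy numbers of our own, not from the paper): Spearman scores the whole ranking, while NDCG@k only rewards placing the truly best sequences at the top.

```python
import numpy as np

# Toy ground-truth fitness values and model scores (hypothetical).
y_true = np.array([0.1, 0.2, 0.3, 0.9, 1.0])   # true fitness
y_pred = np.array([0.3, 0.1, 0.2, 1.0, 0.9])   # model predictions

def spearman(a, b):
    # Rank both vectors (no ties in this toy data) and take the Pearson
    # correlation of the ranks.
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

def ndcg_at_k(y_true, y_pred, k):
    # Discounted cumulative gain of the top-k predicted items,
    # normalized by the DCG of the ideal ordering.
    def dcg(gains):
        return sum(g / np.log2(i + 2) for i, g in enumerate(gains))
    top = np.argsort(-y_pred)[:k]
    ideal = np.sort(y_true)[::-1][:k]
    return dcg(y_true[top]) / dcg(ideal)

# The model scrambles the low-fitness tail (hurting Spearman) but still
# places the two best sequences on top (NDCG@2 stays near 1).
print(round(spearman(y_true, y_pred), 3), round(ndcg_at_k(y_true, y_pred, 2), 3))
# prints: 0.6 0.976
```

A model can therefore look mediocre by Spearman yet be excellent for optimization, which is exactly why the reviewer asks for optimization-style evaluation rather than correlation alone.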
Rebuttal 1: Rebuttal: Many thanks for the confirmation of our methodological novelty, and the constructive and valuable comments on the experiments. We have conducted additional experiments to make the results and conclusion stronger. Here, we provide details on the comments below. >Experiments > 1. More baselines and ablation studies are needed to show whether DePLM is state of the art. > 2. There are no error bars on experimental results. Thanks for your suggestions; we have included fine-tuned versions of SaProt and ProtSSN as additional baselines in both the fitness prediction task and the generalization ability evaluation. Our revised analysis now features comprehensive statistical assessments, including mean Spearman correlation coefficients and standard deviations for each dataset, to highlight result variability. These results, detailed in the updated PDF (Tables 1 and 2), show that DePLM outperforms SaProt and ProtSSN, which incorporate evolutionary, structural, and experimental labels. Notably, DePLM achieves the smallest standard deviation on 4 out of 5 datasets, underscoring its robustness. We believe these additional results support our claim that DePLM is state-of-the-art and will incorporate them into the final draft of our manuscript. > The datasets used in the experiments in the paper should be described more clearly. Why are the baseline results on ProteinGym different between previously published results and what you show in Table 1? The discrepancy arises from differences in dataset composition. ProteinNPT was evaluated in its paper on 100 DMS assays, while the leaderboard includes 216 assays, resulting in an average Spearman correlation coefficient of 0.73. Our paper focuses on a subset of 201 DMS assays due to PLM context length limitations (Appendix C.2), explaining the higher Spearman correlation observed in Table 1. >Diffusion Process > 1. 
Describing the method under the diffusion framework is confusing and obscures what methods should be relevant comparisons to DePLM. > 2. Do you apply the QuickSort algorithm with stochasticity? We address the concerns regarding our use of the diffusion framework with the following points: 1. Evolution optimizes multiple properties simultaneously, often obscuring the specific optimization objective of interest. Modeling the removal of irrelevant properties as a denoising process is reasonable. By framing it this way, we can leverage denoising models to effectively address the problem, aligning with the overall methodology and objectives of our research. 2. Extending the diffusion model to handle the likelihood order, which is deterministic at $t=0$ and $t=\infty$, represents a significant challenge and innovation in our work. We introduce a novel approach by using a sorting algorithm to identify the noise sampling space. The randomness inherent in the pivot index selection when applying the quicksort algorithm ensures that the forward process integrates the necessary stochasticity, aligning with the principles of the diffusion framework. Thus, the answer to the question about applying QuickSort with stochasticity is affirmative. > Why does adding more denoising steps beyond 3 produce worse results? Can you determine the right number of denoising steps without ground truth labels to do hyperparameter tuning? This decline occurs because a higher number of diffusion steps enhances the model's fitting capability, which also increases the risk of overfitting to the training data, as previously reported in [1, 2]. Determining the optimal number of diffusion steps without ground truth labels is challenging. However, empirical evidence suggests that DePLM requires fewer denoising steps than standard diffusion models. This efficiency stems from two key advantages: 1. 
**Well-informative Initialization**: Unlike standard diffusion models that transform uninformative Gaussian noise into a complex target distribution, DePLM starts with an informative evolutionary likelihood. This requires only minor adjustments to reach a property-specific likelihood, so fewer diffusion steps are needed than in standard diffusion models. 2. **Efficient Noise Sampling**: In standard diffusion models, Gaussian noise is injected independently into each data point. However, in DePLM, noise sampling considers the overall difference between the current and target distribution. A quicksort algorithm is employed to generate a sampling pool from which we draw noises. This allows each step to transform the distribution more efficiently, thereby reducing the number of steps needed. > Clarifying the computational complexity of DePLM DePLM offers significant computational efficiency. It predicts the fitness scores of all possible single mutants in one forward pass, while ProteinNPT requires (D/B) forward passes (D for the number of mutants and B for the batch size). For the A4GRB6_PSEAI_Chen_2020 assay, DePLM requires only 347.16 GMac, compared to ProteinNPT's roughly 58.6M GMac (11,724.83 GMac per mutant × 5,001 mutants). These calculations, performed using the ptflops package, highlight DePLM's efficiency. The detailed calculation process is described in Table 4 of the uploaded PDF. > Clarification of "protein optimization" and "protein fitness prediction" Thank you for highlighting the distinction between protein optimization and protein fitness prediction. We acknowledge the important differences between these two tasks, though there are strong correlations between them. In the final version of the paper, we will revise the terminology to more accurately reflect our focus, ensuring clarity and precision in our language. 
Additionally, we will include a discussion on the potential for future work to explore optimization procedures that generate candidate sequences, providing a pathway to validate improvements over baseline methods. [1] Improved Denoising Diffusion Probabilistic Models. Nichol et al. ICML 2021. [2] Extracting Training Data from Diffusion Models. Carlini et al. 2023. --- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you to the authors for their additional results and careful response. I still feel that the use of the diffusion terminology could be better motivated and explained. As I stated in the original review, I think the denoising framework is appropriate. Denoising does not require diffusion though, and it felt more confusing than illustrative to describe the addition of noise as a diffusion process, when it has a fixed end point. With that being said, based on the additional baseline results and the other reviewer's comments, I am willing to increase my score. --- Reply to Comment 1.1.1: Title: Thanks! Comment: Thank you very much for your feedback! We are pleased that we could address your concerns.
Summary: In this work the authors tackle the problem of optimizing protein sequences towards a given property. They outline limitations of existing methods using Protein Language Models within an optimization loop to optimize a property, as those pLMs are not tailored towards a given property. They introduce a rank-based diffusion model to fine-tune pLMs on mutation effect prediction. This scheme can be used to optimize a wild-type sequence towards a given property. They evaluate their method on four datasets and reach state-of-the-art performance. They also show that their method is able to generalize across different datasets, helping to overcome the common problem of data scarcity in protein optimization. Strengths: **Clear Motivation** - The paper is well-written and easy to follow. While I am not an expert on protein optimization, the authors adequately contextualize their work and the motivation of the work is clear. - The idea to adapt a diffusion-based process towards ranking is clever and relevant in the scope of protein optimization. The method is well described and derivations are correct. The method is simple to reproduce with associated pseudo-code. **Experiments** - The experiments are clear and demonstrate the benefit of the proposed approach compared to other baselines. - The authors also provide interesting insights and discussion in Section 4.5 to further justify the necessity of filtering property-irrelevant information for protein optimization. - The benefit of the ranking objective, which is the main contribution of the paper, is clearly demonstrated by the ablation study. Weaknesses: **Choice of Architecture** - The ablation study shows very marginal improvement from using the structural information. Since this information is typically obtained through modules more expensive than sequence modules, the benefit of this part of the model is unclear. - The detailed architecture of DePLM is not provided and some hyperparameter choices look arbitrary. 
For instance, the choice of 3 diffusion steps is surprising and only justified by an ablation study on 3 small datasets. Technical Quality: 3 Clarity: 3 Questions for Authors: - Could you clarify and quantify the cost of training DePLM without the different modules in the ablation studies, to put it in perspective with the associated performance gain? - Could you provide the detailed choice of parameters for DePLM, notably for the different components of the denoising module? - The generalization experiment still shows a significant gap with training and testing on the same data source. I understand gathering new labeled data induces wet-lab costs. Would it make sense from a real-world application point of view, for a given test dataset, to combine data from other datasets and the corresponding training dataset to fine-tune DePLM? Some typos: - l.80 "widetype" - l.200 denosing Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I believe the authors adequately address the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate Reviewer ZGbQ's constructive comments, which have significantly improved our paper. Below, we address each comment in detail. > Marginal improvement from structural information The marginal improvement observed when using the structural information can be attributed to the dataset selection. For GB1 and Fluorescence, the extensive number of training mutants leads to only a slight enhancement from the inclusion of structural data. Furthermore, in our additional evaluations using the data-sparse ProteinGym assays, we observed a consistent improvement in the Spearman correlation coefficient by approximately 2.7%. The updated results are presented in Table 3 of the uploaded PDF file. > The detailed architecture of DePLM is not provided and some hyperparameter choices look arbitrary. For instance, the choice of 3 diffusion steps is surprising and only justified by an ablation study on 3 small datasets. The choice of 3 diffusion steps in DePLM can be attributed to the following reasons: 1. **Well-informative Initialization**: Standard diffusion models transform uninformative Gaussian noise into a complex target distribution, requiring numerous steps to capture the transformation accurately. In contrast, DePLM starts with an initial distribution that represents an informative protein evolutionary likelihood. This initial distribution needs only minor adjustments to reach a property-specific likelihood. Thus, DePLM requires fewer diffusion steps than standard diffusion models. 2. **Efficient Noise Sampling**: In standard diffusion models, Gaussian noise is injected independently into each data point. However, in DePLM, noise sampling considers the overall difference between the current and target distributions. A quicksort algorithm is employed to generate a sampling pool from which we draw noise. This approach allows each step to transform the distribution more efficiently, thereby reducing the number of steps needed.
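The sorting-based noise sampling in point 2 is described only in prose; a purely illustrative sketch of the idea follows. The helper names `build_swap_pool` and `forward_step`, and the use of adjacent-pair swaps, are our own hypothetical rendering, not the authors' implementation:

```python
import random

def build_swap_pool(current_scores, target_ranks):
    """Collect adjacent pairs whose current ordering disagrees with the
    target ranking (rank 0 = best). These out-of-order pairs form the
    pool from which rank-level "noise" is drawn at each forward step."""
    order = sorted(range(len(current_scores)),
                   key=lambda i: current_scores[i], reverse=True)
    pool = []
    for a, b in zip(order, order[1:]):
        if target_ranks[a] > target_ranks[b]:  # pair disagrees with the target order
            pool.append((a, b))
    return pool

def forward_step(current_scores, pool, n_swaps, rng=random):
    """Apply a few swaps sampled from the pool, nudging the current
    ordering toward the target ordering."""
    scores = list(current_scores)
    for a, b in rng.sample(pool, min(n_swaps, len(pool))):
        scores[a], scores[b] = scores[b], scores[a]
    return scores
```

Here "noise" is a set of rank swaps drawn according to the disagreement between the two orderings, rather than independent Gaussian perturbations, which is why each step can move the distribution efficiently.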
It is important to note that while increasing the number of diffusion steps can enhance the model's fitting ability, it also introduces a risk of overfitting, as reported in [1, 2]. This results in an initial improvement in model performance, followed by a decline as the number of diffusion steps continues to rise. Therefore, we empirically determined that setting the number of steps to 3 provided the optimal trade-off. To further elucidate this point, we have included a table below that demonstrates the effects of varying the number of diffusion steps: |Assay|step 1|step 2|step 3|step 4|step 5| |---|---|---|---|---|---| |BLAT_ECOLX|0.699|0.780|0.796|0.787|0.771| |CALM1_HUMAN|0.252|0.338|0.339|0.338|0.308| |DLG4_RAT|0.852|0.858|0.853|0.851|0.839| |DYR_ECOLI|0.709|0.737|0.718|0.728|0.731| > Clarify and quantify the cost of training DePLM, to put in perspective with the associated performance gain. In our experimental setup, training is conducted solely with the Feature Encoder and the Denoising Block. Each training process involves one forward pass through the Protein Language Model (PLM), incurring a cost of **180.56 GMACs**, and one forward pass through the Structure Encoder, incurring a cost of **77.55 GMACs**. The computational overhead of the PLM is essential as it provides the evolutionary distribution as the initial state for denoising. From the analysis of the additional results, we observe that the structural encoder contributes a performance improvement of approximately 0.02. Over 100 epochs, the computational cost for the Feature Encoder amounts to **4101 GMACs** (= 100 epochs * 41.01 GMACs per epoch). For the Denoising Block, the cost totals **4803 GMACs** (= 100 epochs * 3 steps * 16.01 GMACs per forward pass). These components contribute performance improvements of 0.08 and 0.34, respectively, on the ProteinGym dataset. Overall, these designs are beneficial in enhancing performance.
The additional computational cost of the different modules is minimal, making it a worthwhile trade-off for the observed performance improvement. > Could you provide detailed choice of parameters for DePLM, notably for the different components of the denoising module? For the feature encoder in DePLM, we set the sequence state dimension to 1280 and the attention head size to 32. The pairwise-residue state dimension is 32, with a matching attention head size. We apply a dropout rate of 0.2. In the denoising block, the MLP hidden dimension for converting likelihood to representation is set to 1280, using GELU activation. These parameter choices were systematically determined through extensive experimentation and cross-validation to optimize the performance of DePLM while ensuring computational efficiency. For a comprehensive description of the implementation details, please refer to the Supplementary Material (Code: archive>src>models>DePLM_module.py). > Would it make sense from a real-world application point of view, for a given test dataset, to combine data from other datasets and the corresponding training dataset to fine-tune DePLM? To explore this, we performed experiments using two different mixing strategies to construct the training dataset: (1) combining data from other datasets with 1/4 of the corresponding training data, and (2) using only 1/4 of the corresponding training data. The results of these experiments are summarized in the table below: |Dataset|Combined dataset|Only training dataset| |---|---|---| |A4GRB6_PSEAI|0.845|0.849| |CAPSD_AAV2S|0.619|0.584| |DLG4_RAT|0.722|0.717| |GLPA_HUMAN|0.783|0.722| The results indicate that when the corresponding training data is insufficient, incorporating data from other datasets with similar properties significantly improves model performance. We will include these findings in the final draft to emphasize the practical benefits of data integration. [1] Improved Denoising Diffusion Probabilistic Models. 
Nichol et al. ICML 2021. [2] Extracting Training Data from Diffusion Models. Carlini et al. 2023. --- Rebuttal Comment 1.1: Title: Answer to authors' rebuttal Comment: I appreciate the authors' clarification on the ablation studies and believe these additional results will further strengthen their work. Authors also adequately addressed my other comments. This confirms my initial assessment and I believe the paper should be accepted. --- Reply to Comment 1.1.1: Title: Thanks! Comment: Thank you for your valuable feedback! We are glad that our responses have successfully resolved your concerns.
Summary: In this work, the authors propose Denoising Protein Language Models (DePLM) to enhance protein optimization by refining evolutionary information (EI) in protein language models (PLMs). Traditional methods struggle to consider multiple functional properties simultaneously and lack generalizability to novel proteins due to experimental condition-specific measurements. DePLM addresses these issues by denoising EI to remove irrelevant information, improving model generalization and ensuring dataset-agnostic learning. Experimental results demonstrate DePLM's superior performance in mutation effect prediction and generalization to new proteins. Strengths: 1. The work develops a diffusion model for protein property optimization, which is a novel application. 2. The framework adapts important domain knowledge, such as rank-based measurement and the fusion of structure/sequence features. 3. The model achieves superior performance on several benchmarks. Weaknesses: 1. The computational cost of the proposed method has not been well discussed. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. The framework relies on a sorting algorithm to define the forward diffusion process. What is the computational cost? Will it be an obstacle for applying the proposed method to problems at scale? 2. Following the previous question, how does the computational cost (flops and # parameters) compare with baseline models? 3. DePLM only uses 3 diffusion steps, which is much less than a standard diffusion model. I wonder what would happen if more steps are used? Also, what is the reason that DePLM doesn't need multiple sampling steps? 4. In Table 1, DePLM (ESM2) achieves superior performance. Do the authors by any chance have the results with ESM2 alone? 5. In Table 3, it is shown that structural information doesn't make a big difference in performance. What could be the cause of this? In Figure 3c, it seems the correlation increases with the forward diffusion process.
Should it be the opposite way? Maybe I didn't fully understand it, but when noise is added, the correlation between the optimal $\Pi^*$ and the current $\Pi$ should decrease. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors sufficiently addressed the limitations in the submission. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer 5NpU for your insightful feedback. We have addressed your concerns below and hope our responses provide clarity: > Computational Cost > 1. The framework relies on a sorting algorithm to define the forward diffusion process. What is the computational cost? Will it be an obstacle for applying the proposed method to problems at scale? > 2. Following the previous question, how does the computational cost (flops and # parameters) compare with baseline models? We utilize the QuickSort algorithm, which has a time complexity of $O(n\log n)$. Given the sparsity of labels, sorting the assay with the most labels in ProteinGym (\~500k mutants) takes only 1.45 seconds on a 2.8GHz Quad-Core Intel Core i7, while sorting the assay with the median number of labels (\~5k mutants) takes just 0.0056 seconds. By leveraging the wildtype marginal probability, our method can predict the fitness scores of all possible single mutants in a single forward pass. In contrast, the state-of-the-art model ProteinNPT requires (D/B) forward passes to predict the fitness landscape of an assay, where D is the number of data points and B is the batch size. For predicting the A4GRB6_PSEAI_Chen_2020 assay's fitness landscape, ProteinNPT requires **58.5M GMACs with 219M parameters**, while our DePLM only requires **347.16 GMACs with 834M parameters** (792M non-trainable and 42.2M trainable). The detailed calculation process is described in Table 4 of the uploaded PDF. Overall, DePLM proves to be an effective and efficient method for predicting protein fitness landscapes, and the inclusion of the ranking algorithm does not hinder scalability compared to baseline models. > Diffusion Step > 1. DePLM only uses 3 diffusion steps, which is much less than a standard diffusion model. I wonder what would happen if more steps are used? Also, what is the reason that DePLM doesn't need multiple sampling steps? > 2.
In Figure 3c, it seems the correlation increases with the forward diffusion process. Should it be the opposite way? Maybe I didn't fully understand it, but when noise is added, the correlation between the optimal $\Pi^{\star}$ and the current $\Pi$ should decrease. The few diffusion steps in DePLM can be attributed to the following reasons: 1. **Well-informative Initialization**: Standard diffusion models transform uninformative Gaussian noise into a complex target distribution, requiring numerous steps to capture the transformation accurately. In contrast, DePLM starts with an initial distribution that represents an informative protein evolutionary likelihood. This initial distribution needs only minor adjustments to reach a property-specific likelihood. Thus, DePLM requires fewer diffusion steps than standard diffusion models. 2. **Efficient Noise Sampling**: In standard diffusion models, Gaussian noise is injected independently into each data point. However, in DePLM, noise sampling considers the overall difference between the current and target distributions. A quicksort algorithm is employed to generate a sampling pool from which we draw noise. This approach allows each step to transform the distribution more efficiently, thereby reducing the number of steps needed. Increasing the number of diffusion steps leads to a deterioration in model performance, as illustrated in Figure 8. This decline occurs because a higher number of diffusion steps enhances the model's fitting capability, which increases the risk of overfitting to the training data, as reported in [1, 2].
To further elucidate this point, we present the performance metrics of the model on both the training and test datasets in the table below: | Step | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | | --- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | | Training Spearman | 0.849 | 0.864 | 0.866 | 0.881 | 0.882 | 0.883 | 0.884 | 0.886 | | Testing Spearman | 0.694 | 0.712 | 0.716 | 0.685 | 0.587 | 0.575 | 0.576 | 0.567 | Figure 3c illustrates the relationship between the Spearman coefficient of the evolution likelihood and the intermediate rank variables as a function of the number of forward steps. The caption might be a bit confusing, but we will correct it in the final version. > Model Architecture > 1. In Table 1, DePLM (ESM2) achieves superior performance. Do the authors by any chance have the results with ESM2 alone? > 2. In Table 3, it is shown that structural information doesn't make a big difference in performance. What could be the cause of this? The PLM-based results reported in Table 1 are sourced from the ProteinGym Leaderboard [https://proteingym.org/](https://proteingym.org/), which does not include ESM2-based results. Here, we report ESM2 results obtained from our own experiments and will include these in the final draft. | ProteinGym | Stability | Fitness | Expression | Binding | Activity | | ------------ | --------- | ------- | ---------- | ------- | -------- | | ESM2 | 0.882 | 0.563 | 0.645 | 0.587 | 0.576 | | DePLM (ESM2) | 0.897 | 0.707 | 0.742 | 0.764 | 0.693 | Regarding the second question, the minimal performance difference observed with structural information can be attributed to the dataset selection. When considering the label-rich GB1 and Fluorescence datasets, DePLM only shows a slight improvement from incorporating the structural data. To further investigate the role of structural information, we conducted additional evaluations using the label-sparse ProteinGym assays.
The updated results are presented in Table 3 of the uploaded PDF file, which demonstrates a consistent enhancement in the Spearman correlation coefficient of approximately **2.7%** when incorporating structures. [1] Improved Denoising Diffusion Probabilistic Models. Nichol et al. ICML 2021. [2] Extracting Training Data from Diffusion Models. Carlini et al. 2023. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for the detailed response, which has addressed most of my concerns. I am still positive about this submission. --- Reply to Comment 1.1.1: Title: Thanks! Comment: Thank you immensely for your feedback! We are gratified to know that we have successfully addressed your concerns.
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewers' thorough and constructive feedback on our manuscript. In response, we have conducted a series of additional experiments and analyses to address your concerns and strengthen our paper. Below, we provide an overview of the new results included in the uploaded PDF: 1. **Additional Baselines**: In Tables 1 and 2, we have supplemented the results with fine-tuned ProtSSN and SaProt models on the protein fitness prediction task (Q1) and the generalization ability evaluation (Q2). In addition to the average Spearman coefficient, we also include the average standard deviation. We observe that DePLM outperforms these models, which consider sequential, structural, and experimental signals simultaneously, demonstrating the superiority of the proposed model architecture. 2. **Extended Ablations**: In Table 3, we have included more datasets to verify the necessity of incorporating structural information (Q3). The results show that introducing structural information consistently and significantly improves the model's performance. 3. **Computational Cost Analysis**: In Table 4, we use the A4GRB6_PSEAI_Chen_2020 assay as a case study to compare the computational cost and parameter count of DePLM and ProteinNPT in both the training and inference phases. The results indicate that our proposed method is much more efficient. In the following sections, we present a detailed point-by-point response to the questions raised. Pdf: /pdf/714674d911ed703e3d5dfac2ef155bcd7cfdaf55.pdf
NeurIPS_2024_submissions_huggingface
2024
Aligning Diffusion Models by Optimizing Human Utility
Accept (poster)
Summary: This paper introduces Diffusion-KTO, a novel approach for aligning text-to-image diffusion models with human preferences using per-image binary feedback (like likes/dislikes) rather than pairwise preference data. The key contributions include: 1. Extending the human utility maximization framework used to align language models to the domain of diffusion models. 2. Developing an objective that allows training on per-image binary feedback rather than pairwise preferences, enabling the use of abundant internet data like likes/dislikes. 3. Demonstrating through experiments that Diffusion-KTO outperforms existing alignment approaches like Diffusion-DPO, as judged by both automated metrics and human evaluators. 4. Showing that Diffusion-KTO can align models to specific user preferences using synthetic experiments. The authors fine-tune Stable Diffusion models using their Diffusion-KTO objective on datasets like Pick-a-Pic. They evaluate the aligned models using automated metrics and human studies, comparing them against baselines like supervised fine-tuning and other alignment methods. The results indicate that Diffusion-KTO produces images that are preferred by humans and score higher on various automated metrics compared to existing approaches, while only requiring simpler per-image feedback data. The paper also discusses limitations, broader impacts, and potential misuse of the technology. Overall, Diffusion-KTO presents a new framework for improving text-to-image models using readily available preference signals. Strengths: The paper demonstrates originality in several ways: 1. It extends the utility maximization framework from language models to diffusion models, representing a novel cross-domain application of ideas. 2. It introduces a new alignment objective that can work with per-image binary feedback rather than pairwise preferences, opening up new possibilities for using abundant internet data. 
3. The approach allows for customization of text-to-image models to individual user preferences, which is an innovative direction in this field. In terms of the quality of the approach and execution: 1. The authors provide a comprehensive evaluation, using both automated metrics and human studies to assess their method. 2. They compare against multiple baselines and state-of-the-art methods, demonstrating rigor in their experimental design. 3. The paper includes ablation studies. 4. The authors are transparent about limitations and potential negative impacts. The paper is generally well-structured and clear: 1. The methodology is explained in detail, with the objective function presented. 2. Visual results are provided to illustrate the improvements, aiding in understanding. 3. The paper includes a detailed appendix with additional results and implementation details, supporting reproducibility. The work is significant in how it improves text-to-image models in the setting where there is no pairwise preference data, but rather binary good/bad examples. The ability to customize models to individual preferences could have broad implications for personalized AI systems. Weaknesses: The paper acknowledges that the method may learn biases from skewed preference data, but doesn't provide a detailed analysis of this issue. A more thorough examination of how different biases in the feedback data affect the aligned model would be valuable. In some sense, the method itself is not novel, as it has just been applied to the text-to-image setting where other preference-tuning methods have shown promise, and similar methods from NLP research can potentially be applied to this setting in the future, e.g. BCO or SimPO etc. The experiments primarily focus on Stable Diffusion v1-5 and v2-1. Including a wider range of diffusion models, as they are generally available via the diffusers library, would better demonstrate the generalizability of the approach.
While the paper demonstrates the potential for personalizing models to individual preferences, this is only shown through synthetic experiments. Technical Quality: 3 Clarity: 3 Questions for Authors: Did you use the trainers from the diffusers library to base your model on? How does the effectiveness of Diffusion-KTO vary with different types of prompts? Is it more effective for certain categories of images or styles of prompts? Have you considered extending the method to incorporate other types of feedback beyond binary signals? For instance, could it be adapted to use scalar ratings or textual feedback? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper includes a dedicated "Limitations" section (Section 7), which addresses several key points: 1. It acknowledges that Diffusion-KTO inherits the limitations of the base text-to-image models. 2. It notes that the preference data used (the Pick-a-Pic dataset) may contain biases or inappropriate content. 3. The authors recognize that the choice of utility function remains an open question. 4. They acknowledge that their model may propagate negative stereotypes present in the training data. The authors discuss both potential positive and negative societal impacts: Positive impacts: 1. Improved alignment of text-to-image models with human preferences. 2. Potential for personalized image generation systems. Negative impacts: 1. Risk of propagating biases present in preference data. 2. Potential for misuse in generating inappropriate or harmful imagery. 3. Concerns about the model inheriting weaknesses of the base text-to-image model, including generating images that reflect negative stereotypes. The authors have made a good-faith effort to address limitations and societal impacts. They've covered key points such as data biases, potential misuse, and ethical considerations in experimentation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive review! We are excited to hear that you found our method innovative and significant to improving text-to-image models. We are also happy to hear that you appreciated our comprehensive experiments and overall presentation. Please find our responses below. **A more thorough examination of how different biases in the feedback data affect the aligned model would be valuable.** To understand the effect of aligning with Pick-a-Pic v2, which is known to contain some NSFW content, we run a CLIP-based NSFW safety checker on images generated using test prompts from Pick-a-Pic v2 and HPSv2. For Pick-a-Pic prompts, 5.4% of Diffusion-KTO generations are marked NSFW, and 4.4% of SD v1-5 generations are marked NSFW. For HPSv2 prompts, which are safer, 1.3% of Diffusion-KTO generations are marked NSFW, and 1.0% of SD v1-5 generations are marked NSFW. Overall, training on the Pick-a-Pic dataset leads to a marginal increase in NSFW content. We observe similar trends for Diffusion-DPO, which aligns with the same preference distribution (5.8% NSFW on Pick-a-Pic & 1.3% NSFW on HPSv2). We would like to emphasize that our method is agnostic to the choice of preference dataset, as long as the data can be converted into binary per-sample feedback. We used Pick-a-Pic because of its size and to fairly compare with related works. **In some sense, the method itself is not novel... similar methods from the NLP research can potentially be applied to this setting in the future, e.g. BCO or SimPO etc.** Regarding BCO and SimPO, we agree that these works show promising results in the NLP domain. However, we find it infeasible to implement and evaluate these methods for text-to-image generation in such a short rebuttal period. We will investigate the effectiveness of these approaches in future work. Please see the "Novel Contributions of Diffusion-KTO" section of our main rebuttal for further details regarding novelty.
**Including a wider range of diffusion models, as they would better demonstrate the generalizability of the approach.** We provide results using two different models (SD v1-5 and SD v2-1) in our main paper and Appendix. These results highlight the generality of Diffusion-KTO for various text-to-image diffusion models. We agree that it would be interesting to see how our method works for other state-of-the-art models. However, since recent models use more complicated architectures (e.g., the Multi-Modal Diffusion Transformer of SD v3.0) and significantly more parameters, it is not feasible to complete these experiments within a week. We leave these endeavors for future work. **While the paper demonstrates the potential for personalizing models to individual preferences, this is only shown through synthetic experiments.** While binary preferences are easier to collect than paired preferences, curating high-quality personalized preference data is still expensive. We explored the possibility of scraping likes and dislikes from websites such as ArtStation, but this does not comply with their terms of service. Additionally, it is hard to evaluate whether the model can effectively align with personalized preferences due to the diverse nature of human interests. In contrast, synthetic experiments are more controllable, with measurable quantitative metrics such as aesthetic scores (Appendix C). For these reasons, we leave the task of curating a high-quality personalized feedback dataset for future work. **Did you use the trainers from the diffusers library?** We refer the reviewer to our sample code in the supplementary material for implementation details. We do not use the trainers.
Across these metrics, our model performs best for the "painting" and "concept-art" styles. We attribute this to our training data. Since Pick-a-Pic prompts are written by users, they will reflect users' biases, e.g., a bias towards artistic content. Such biases are also noted by the authors of HPSv2, who state "However, a significant portion of the prompts in the database is biased towards certain styles. For instance, around 15.0% of the prompts in DiffusionDB include the name “Greg Rutkowski”, 28.5% include “artstation”." We also observe that different metrics prefer different styles. For example, the "photo" style has the highest PickScore but the lowest ImageReward. With this in mind, we would like to underscore that our method, Diffusion-KTO, is agnostic to the preference distribution (as long as feedback is per-sample and binary), and training on different, less biased preference data could avoid such discrepancies. | Style | Aesthetic | PickScore | ImageReward | CLIP | HPS | |:-----------:|:---------:|:------:|:------------:|:------:|:-----:| | anime | 5.493 | 21.569 | 0.716 | 34.301 | 0.368 | | concept-art | 5.795 | 21.011 | 0.804 | 33.141 | 0.359 | | paintings | 5.979 | 21.065 | 0.802 | 33.662 | 0.360 | | photo | 5.365 | 21.755 | 0.471 | 31.047 | 0.332 | **Have you considered extending the method to incorporate other types of feedback beyond binary signals?** While we have considered exploring a variety of human feedback, this work, Diffusion-KTO, focuses specifically on binary feedback. For continuous feedback, we show in Appendix C an example where Diffusion-KTO can effectively align with the preference distribution after the continuous feedback signal is thresholded into a binary one. Textual feedback is non-trivial to implement in such a short rebuttal period, and we leave this for future work.
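The thresholding of a continuous signal into per-sample binary feedback mentioned above is simple preprocessing; a minimal sketch follows. The helper name `to_binary_feedback` and the median default are our own illustration, not the paper's code:

```python
from statistics import median

def to_binary_feedback(scores, threshold=None):
    # Diffusion-KTO's objective only needs per-image binary feedback,
    # so a continuous signal (e.g. an aesthetic score) can simply be
    # thresholded; defaulting to the median keeps the split balanced.
    t = median(scores) if threshold is None else threshold
    return [1 if s >= t else 0 for s in scores]
```

For example, `to_binary_feedback([1.0, 5.0, 3.0])` thresholds at the median 3.0 and returns `[0, 1, 1]`.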
Summary: This paper (DKTO) combines D3PO and KTO. D3PO lets us apply DPO to diffusion models using pairwise preferences, and KTO is a way to align generative models (specifically autoregressive LLMs) using pointwise preferences. For example, this gives us a way to tune text-conditioned image generative models from thumbs-up/thumbs-down or star-rating types of bandit feedback, without training a secondary reward model. DKTO works by optimizing the expected utility of the advantage of the new policy versus a reference policy. And empirically, its results on fine-tuned Stable Diffusion are pretty compelling. However, the mathematical derivation of DKTO is a bit messy and not rigorous. Strengths: The results in the paper are compelling and the contributions in the paper seem to be novel. AFAICT relevant baselines were considered for the experiments, and DKTO seems to win handily against them. Weaknesses: Section 4.1, which derives the actual Diffusion-KTO method, is really short and seems wrong at first glance. To go from Eq. (6) to Eq. (7), the relation Q* = β (log π_θ - log π_ref) is substituted, but that relation is lifted from the D3PO paper, which derived it under the assumption that the policy optimization objective is E[Q*] - β KL[π_θ || π_ref]. But the Diffusion-KTO objective is different, so it's not clear why Prop. 1 from the DKTO paper still applies. Secondly, the substitution Q_ref = β KL is not very well motivated. The notation Q_ref suggests that it should not have had any dependence on π_θ at all! Why is the KL divergence a reasonable proxy for Q_ref? Overall, I think the paper in its current form is weak from a theory/conceptual point of view, and the method needs to be motivated a lot more cleanly. Please let me know if I have made a mistake; I'll gladly update my scores. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses.
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: Yes, authors have addressed the limitations adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! We are glad that you appreciated the contributions and comprehensive experiments presented in this paper. Regarding the concerns on specific formulations, we have provided a detailed review of our formulation in the main rebuttal that should address these concerns. The relevant parts are summarized as follows: **It's not clear why prop 1 from the DKTO paper still applies.** We assume you are referring to Prop. 1 from the D3PO (not DKTO) paper. It is sufficient to establish this relation under the policy optimization objective. The "substitution" is not wrong because KTO defines its implicit reward function $r_\theta(X_0,C)=\beta\log(\pi_{\theta}(X_0|C)/\pi_{ref}(X_0|C))$ in an axiomatic approach (Definition 3.4 of the KTO paper). KTO justifies this definition through a "classic prospect theory experiment", and the fact that $r_\theta$ is in the same equivalence class as the human-preference-induced reward function in (eq 2) at the optimal policy $\pi^*_\theta$ under the RLHF objective (eq 2). In prospect theory, $r_\theta$ "amounts to the dollar amount assigned to each outcome." However, the KTO formulation does not imply that the KTO objective is strictly equivalent to the RLHF objective in the same way as DPO. There is no "substitution" in the KTO formulation. By "applying the relation $Q^*(a, s)$" in Sec. 4.1, we meant that we adopt the implied quality function definition $Q_\theta(X_{t-1},X_{t},C)=\beta\log(\pi_{\theta}(X_{t-1}|X_{t},C)/\pi_{ref}(X_{t-1}|X_{t},C))$, based on the relation $Q^*(X_{t-1},X_{t},C)=\beta\log(\pi^*_\theta(X_{t-1}|X_{t},C)/\pi_{ref}(X_{t-1}|X_{t},C))$, as well as the connection between $Q^*$ and the RLHF objective established by D3PO. We provide further details in the main rebuttal. If you have any additional questions, feel free to let us know and we are happy to answer them in the discussion period.
**Secondly, the substitution $Q_{ref}=\beta D_{KL}$ is not very well motivated.** The reference point of KTO is defined as $z_0=E_D[r_\theta(X_0,C)]$, the expected reward over some distribution $D$ of $(X_0,C)$. This definition stems from the assumption that "rather than having just one dispreferred generation serve as the reference point $z_0$", "humans judge the quality of [generation] in relation to all possible input-output pairs." Similarly, we define $Q_{ref}$ as $E_{D'}[Q_\theta(X_{t-1},X_{t},C)]$ over some distribution $D'$ of $X_{t-1},X_{t},C,t$. KTO sets $D$ to be a uniform distribution over the input dataset, so that the reference point simplifies to the expected KL divergence. Following an identical derivation, we can establish that $Q_{ref}$ in the Diffusion-KTO formulation simplifies to the expected KL divergence. We provide further details in the main rebuttal. We will add these details in future versions and apologize for any confusion caused by this omission. Note: In the above discussion, we replace the term $l(y)$, the normalization factor in the definition of $r_\theta(X_0,C)$, with a constant $\beta$ following Eq. (6) of the KTO paper for simplicity. We also replace the notation $x,y$ with $C,X_0$ for consistency with the notation of diffusion models.
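The simplification of the reference point mentioned above can be sketched as follows (our notation; a sketch, under the assumption that the expectation is taken over generations drawn from the current policy, which is the setting in which the log-ratio expectation reduces to a KL term):

$$z_0 \;=\; \mathbb{E}_{(X_0,C)\sim D}\!\left[\beta \log\frac{\pi_\theta(X_0\mid C)}{\pi_{ref}(X_0\mid C)}\right] \;=\; \beta\,\mathbb{E}_{C}\!\left[\mathrm{KL}\!\left(\pi_\theta(\cdot\mid C)\,\big\|\,\pi_{ref}(\cdot\mid C)\right)\right] \quad \text{when } X_0\sim\pi_\theta(\cdot\mid C).$$

An identical expression with $Q_\theta$ in place of $r_\theta$ gives the expected-KL form of $Q_{ref}$.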
Summary: This paper presents Diffusion-KTO, a novel preference learning algorithm for diffusion models. The proposed preference learning algorithm is based on Kahneman & Tversky Optimization (KTO). Diffusion-KTO enables aligning a diffusion model using only binary pointwise feedback, improving data efficiency and robustness. The experiments show promising results. Strengths: * Aligning a diffusion model from human feedback is an important problem that receives huge attention from the community. * The experimental results are promising and interesting. * The overall presentation of the paper is clear, with some parts requiring improvements (see Weaknesses) Weaknesses: * The proposed algorithm is almost a simple application of KTO to diffusion models. The originality of the contribution is, therefore, not very strong. Of course, the combination of diffusion models and KTO is indeed novel; somebody else might have done it soon if this paper hadn't. I acknowledge the novelty, but the paper would have been stronger with more original ideas. * The paper has to be more self-contained. The paper borrows a lot of components from existing works, such as KTO and D3PO, and the borrowed elements should be explained well. Section 3.3 is particularly dissatisfying. There is no rigorous definition of $Q$, no definition of $\pi^*$, and no explanation of why Equation 4 is an approximate objective. In a similar spirit, Section 4.1 could be more informative, such as providing an explanation for why $Q_{ref}$ has to be set as proposed. * The major concern regarding the experiment is that the number of human subjects and human responses are not disclosed (please correct me if I am wrong). This information is required to judge the uncertainty of the winning rate presented in Figure 4. It would be great if the uncertainty of the winning rate could also be provided. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses.
I would adjust my rating based on the response regarding the weaknesses of the paper. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The paper addresses its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! We are happy to hear that you find our problem area important and that you appreciate the presentation of our work and our experimental results. Please see our responses below. **The proposed algorithm is almost a simple application of KTO to diffusion models.** Please see the “Novel Contributions of Diffusion-KTO” section of our main rebuttal for our response. **The paper has to be more self-contained.** Regarding the specific concerns raised (i.e., the substitution of $Q$, the definition of $\pi^*$, and the choice of $Q_{ref}$), we provide a detailed review of our Diffusion-KTO formulation with clarifications for these definitions and choices in the main rebuttal (Clarification of Formulation). We hope these discussions can resolve your concerns. If you have any additional questions, please let us know and we would be happy to answer them in the discussion period. **The number of human subjects and human responses are not disclosed.** We are sorry for this oversight. We collected 300 human responses. The 95% confidence interval is 65.6%-74.8% for the win-rate against DPO and 73.6%-82.9% for the win-rate against SDv1-5. Both results are significant, with p-value < 0.001. --- Rebuttal Comment 1.1: Title: Thanks for the reply. Comment: Thanks for the response. Your answer addresses my questions well, and it is impressive to see the statistical significance of the result.
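For concreteness, an interval of this kind can be approximated with a standard normal-approximation confidence interval for a binomial proportion. This is a sketch only: the exact response counts behind the reported 65.6%-74.8% figure, and whether the authors used a normal, Wilson, or exact interval, are assumptions here, so the numbers below are illustrative rather than a reproduction.

```python
import math

def win_rate_ci(wins, n, z=1.96):
    """Normal-approximation 95% confidence interval for a binomial win rate."""
    p = wins / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Illustrative only: with n = 300 responses and a ~70% observed win rate,
# the interval is roughly +/- 5 percentage points around the estimate.
lo, hi = win_rate_ci(211, 300)
```

With a few hundred responses the interval width shrinks as $1/\sqrt{n}$, which is why 300 responses suffice for the reported significance.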
Summary: This paper proposes Diffusion-KTO, which extends the KTO theory to develop a preference optimization algorithm for diffusion models. Strengths: * The derivations are clean, direct, and seem reasonably principled to me. * The experimental results are good and show a clear improvement over preexisting work. However, I would include more quantitative examples in the main paper. * Aligning diffusion model methods is a reasonably impactful area, and doing so with novel techniques is nontrivial and thus requires works like this one. Weaknesses: * While I enjoyed the paper, the novelty is somewhat limited. In particular, the paper is a relatively direct combination of Diffusion-DPO-type methods and novel DPO-style losses. Technical Quality: 3 Clarity: 3 Questions for Authors: * Can you include an example without the use of a preference pair dataset? Pick-a-pic is paired, but the main benefit of KTO should be the ability to extend beyond paired data (which might be useful for diffusion models since generation is expensive). Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review! We are excited to hear that you appreciate the importance of our problem statement, the strength of our experimental results, and the presentation of our method. Please find our responses below. **The novelty is somewhat limited.** Please see the “Novel Contributions of Diffusion-KTO” section of our main rebuttal for our response. **Can you include an example without the use of a preference pair dataset?** We provide results when using two protocols for sampling at the population level of our data. We tried a) for each prompt in the dataset, incorporating both the winner and the loser separately as training data, and b) for each prompt in the dataset, keeping either the winner or the loser as training data. Empirically, we find no significant differences between the performance of these two protocols. A similar effect was also observed by the authors of the original KTO paper. Specifically, we find the second protocol (b) performs slightly better on Aesthetics (+0.02), PickScore (+0.04), HPS (+0.002), CLIP (+0.13), and ImageReward (+0.05). These differences, apart from ImageReward, are within the error bounds of our estimates. The second setup (b) could be marginally better than the first setup (a) as it may remove some relative noise within paired preference data. In summary, we did not train with paired preferences, and we find that incorporating both the winner and loser separately, or incorporating either the winner or the loser, into our training data generally leads to no significant difference in performance. Additionally, we would also like to point out that we ran two toy experiments using binary preferences in Appendix C. In these experiments, we created two synthetic datasets with binary labels by a) using a red filter and setting red-filtered images as desirable and non-filtered ones as undesirable, and b) thresholding the LAION aesthetic score and labeling samples with high scores as desirable and samples with low scores as undesirable.
Results show that Diffusion-KTO can effectively align to binary preferences at the user level.
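A minimal sketch of the two sampling protocols described above, assuming the paired dataset is given as (prompt, winner, loser) triples; the function and variable names here are hypothetical, not from the authors' code.

```python
import random

def to_pointwise(paired_data, protocol="both", seed=0):
    """Convert paired preferences (prompt, winner, loser) into binary
    pointwise examples (prompt, image, desirable).

    protocol "both": keep winner (desirable) and loser (undesirable) separately.
    protocol "one":  randomly keep either the winner or the loser per prompt.
    """
    rng = random.Random(seed)
    out = []
    for prompt, winner, loser in paired_data:
        if protocol == "both":
            out.append((prompt, winner, True))
            out.append((prompt, loser, False))
        else:  # "one"
            if rng.random() < 0.5:
                out.append((prompt, winner, True))
            else:
                out.append((prompt, loser, False))
    return out
```

Either output is pointwise: the training loss never compares two images of the same prompt, which is what distinguishes this setup from paired-preference training.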
Rebuttal 1: Rebuttal: We thank the reviewers for their constructive feedback. We are glad to hear that the reviewers recognize our strong experimental results (Reviewers MiLc, 7SUT, s9JR, YybC), the importance of learning from binary preference data (Reviewers MiLc, 7SUT, YybC), and the novelty of extending the utility maximization framework to the setting of diffusion models (Reviewers s9JR & YybC). In this main rebuttal, we address the common concern about novelty and clarify some common points of confusion regarding the theoretical formulation. For other specific questions, we refer the reviewers to the individual rebuttals. **Novel Contributions of Diffusion-KTO** We would like to emphasize the novel contributions of our work. Our work is the first to extend the utility maximization framework to the setting of diffusion models (as noted by Reviewers s9JR & YybC). Unlike the original KTO paper, we explore the effects of various utility functions and provide an analysis of their performance (see Section 6 and Appendix D.4). As we are extending this framework to a novel domain (text-to-image diffusion models), we perform such experiments to avoid naively assuming that the Kahneman & Tversky model would work best. Last but not least, we would like to highlight the potential impact of our method. Diffusion-KTO demonstrates that text-to-image diffusion models can be aligned with human preferences using binary preference data and can even outperform methods that require paired data. This is a new capability that, to our knowledge, was not previously available, as the best alignment methods required paired preference data. As a result, this opens a world of possibilities, as binary feedback is abundant on the internet and can be collected at scale. **Clarification of Formulation** We apologize for these oversights. A diffusion process from an image $X_0$ to random noise $X_T$ induces a conditional distribution $P(X_0|C)=\prod_{t=1}^{T}P(X_{t-1}|X_t)P(X_T|C)$, where $C$ is the prompt.
$P(X_T|C)$ can be written as $P(X_T)$, which is an i.i.d. Gaussian distribution. We can induce a global policy $\pi(X_0|C)$ representing the whole sampling process, as well as a local policy $\pi(X_{t-1}|X_{t},C)$ representing each sampling step. A reward function $r(X_0,C)$ is a real-valued function that encodes human preference. Every possible $r(X_0,C)$ induces an optimal global policy $\pi^*(X_0|C)$ that maximizes this reward. A "quality" function $Q(X_{t-1},X_{t},C)$ is a real-valued function that similarly induces an optimal local policy $\pi^*(X_{t-1}|X_{t},C)$. Conceptually, $Q$ can be an arbitrary real-valued function. $Q^*$ is a special choice of $Q$ such that the induced local policy maximizes the expected global reward $r(X_0,C)$. Hence (Eqn. 4) is an approximation of the RL objective (Eqn. 2). D3PO shows the relation $Q^*(X_{t-1},X_{t},C)= \beta\log(\pi^*_\theta(X_{t-1}|X_{t},C)/\pi_{ref}(X_{t-1}|X_{t},C))$. Following MDP convention, we can consider $(X_{t},C)$ as a state $s$ and $X_{t-1}$ as an action $a$, and use $(s,a)$ in the notation instead. These formulations are elaborated in Sec 4.1 of the D3PO paper; we will incorporate more details in the future version of our paper. In Diffusion-KTO, $Q$ and $Q_{ref}$ in (Eqn. 4) follow the definitions proposed in the original KTO paper. KTO adopted an axiomatic approach and defined the implicit reward function as $r_\theta(X_0,C)=\beta\log(\pi_{\theta}(X_0|C)/\pi_{ref}(X_0|C))$. The justification of this definition stems from classic prospect theory experiments, as well as the observation that $r_\theta$ is in the same equivalence class as the human-preference-induced reward function at the optimal policy $\pi^*_\theta$ under the RLHF objective (Eqn. 2). This formulation can be found in Sec 3.2 of the KTO paper. Unfortunately, this formulation is intractable as it involves the global policy $\pi(X_0|C)$, and cannot be directly applied.
Similarly, we can define an implicit quality function $Q_\theta(X_{t-1},X_{t},C)=\beta\log(\pi_{\theta}(X_{t-1}|X_{t},C)/\pi_{ref}(X_{t-1}|X_{t},C))$, since the results of D3PO have established the relation between $Q^*$ and the optimal policy. The reference point of KTO is defined as $z_0=E_D[r_\theta(X_0,C)]$, the expected reward over some distribution $D$ of $(X_0,C)$. This definition stems from the assumption that "rather than having just one dispreferred generation serve as the reference point $z_0$", "humans judge the quality of [generation] in relation to all possible input-output pairs." Similarly, we define $Q_{ref}$ as $E_{D'}[Q_\theta(X_{t-1},X_{t},C)]$ over some distribution $D'$ of $X_{t-1},X_{t},C,t$. KTO sets $D$ to be a uniform distribution over the input dataset, so that the reference point simplifies to the expected KL divergence. Following an identical derivation, we can establish that $Q_{ref}$ simplifies to the expected KL divergence. We will add these details in future versions and apologize for any confusion caused by this omission. Note: In the above discussion, we replace the term $l(y)$, the normalization factor in the definition of $r_\theta(X_0,C)$, with a constant $\beta$ following Eq. (6) of the KTO paper for simplicity. We also replace the notation $x,y$ with $C,X_0$ for consistency with the notation of diffusion models.
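To make the formulation concrete, here is a minimal numeric sketch of the per-step implicit quality and a KTO-style utility loss. The function names and the scalar setup are hypothetical: in practice the log-probabilities come from the diffusion model's denoising steps, $Q_{ref}$ would be a running estimate of the expected KL term, and the utility here is the logistic choice from the family of utility functions discussed in the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def implicit_quality(logp_theta, logp_ref, beta=0.1):
    """Per-step implicit quality Q_theta = beta * log(pi_theta / pi_ref),
    computed from log-probabilities of the same denoising step."""
    return beta * (logp_theta - logp_ref)

def kto_style_loss(logp_theta, logp_ref, desirable, q_ref, beta=0.1):
    """Maximize utility sigma(Q - Q_ref) for desirable samples and
    sigma(Q_ref - Q) for undesirable ones; return the negated utility."""
    q = implicit_quality(logp_theta, logp_ref, beta)
    margin = q - q_ref if desirable else q_ref - q
    return -sigmoid(margin)
```

Note the asymmetry: raising the policy's log-probability of a desirable step lowers the loss, while raising it on an undesirable step raises the loss, with the reference point $Q_{ref}$ centering both effects.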
NeurIPS_2024_submissions_huggingface
2024
Analysis of Corrected Graph Convolutions
Accept (poster)
Summary: This paper studies the effects of removing the top eigenvector of the adjacency matrix used for aggregation in graph convolutions. For the contextual stochastic block model (CSBM), this is theoretically shown to be beneficial. The authors provide several theoretical statements describing misclassification ratios and the chances of achieving linear separation. Experiments confirm the superiority of removing the smooth eigenvector for the CSBM. Strengths: * The background, literature review, and preliminaries are nicely presented. * The ideas behind the proofs are interesting. * The conducted experiments confirm the claimed benefits of removing the top eigenvector for the CSBM. Weaknesses: * The structure of the paper is confusing. In Section 4, two Theorems are provided without proofs. Proofs are also not provided in the Appendix. Then, there is a proof sketch for the Theorems in Section 6. * A clear focus would help the accessibility of this work. It would suffice to consider either $\hat{A}$ or $\tilde{A}$ and Theorem 4.1 or Theorem 4.2 while providing more details for the selected case. * Theoretical statements and their implications are hard to understand. Clearly stating all symbols and providing additional details would help. * The implications of this work are not clear to me. We know that $M^kx$ gets dominated by the top eigenvector of $M$ with rate $\lambda_2/\lambda_1$. Setting the largest eigenvalue to zero results in dominance of the second eigenvector. As pointed out in this work, this eigenvector corresponds to sparse and balanced bipartitions. This analysis seems like a complicated way to come to the same conclusion. * The practical usage of removing the top eigenvector seems very limited, as one dominating signal is exchanged for another. This is confirmed in Figure 3.
Technical Quality: 2 Clarity: 1 Questions for Authors: * What are the benefits of this analysis against existing insights that the signal corresponding to the top eigenvector of a matrix gets amplified the most and dominates representations? * What are potential practical implications that can now be developed? * l. 332: Why is the O-notation used for $p=O(\log^3 n/n)$? Confidence: 4 Soundness: 2 Presentation: 1 Contribution: 3 Limitations: Limitations are not explicitly presented. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We answer the concerns below. > The structure of the paper is confusing. Two Theorems are provided without proofs. Proofs are also not in the Appendix. We prove both our theorems rigorously in the appendix. For the proof of Theorem 4.1, we refer to Appendix C (titled "Proofs in Section 6"). Please check line 611, where we start by proving the essential lemmas, followed by the proof of Theorem 4.1 in line 662. For Theorem 4.2, the sketch is in Section 7 (line 277) and the full proof is in Appendix D, titled "Proofs in Section 7" (line 676). The explicit proof of Theorem 4.2 is given in line 760. We will point to the proofs explicitly with hyperlinks in the main paper in our revision. > It would suffice to consider either $\hat{A}$ or $\tilde{A}$ and Theorem 4.1 or Theorem 4.2. Thank you for the suggestion. We believe that both Theorem 4.1 and 4.2 are important. In community detection results for the SBM, it is customary to study both exact and partial recovery results (see \[E. Abbe, Community Detection and Stochastic Block Models, JMLR 2018\]). In our paper, we provide answers to such customary questions for node classification in the CSBM, where we also have node feature information in addition to the graph. > Theoretical statements and their implications are hard to understand. Clearly stating all symbols and providing additional details would help. We provide an extensive discussion of the theorems and their meaning in terms of the number of convolutions and the SNR in the data (see lines 170 to 188 for Theorem 4.1, and lines 193 to 202 for Theorem 4.2). It would help if the reviewer could point us to the part that is hard to understand, and tell us which symbols and additional details are required, so we can explain it better and modify our paper accordingly. > The implications of this work are not clear to me.
Setting the largest eigenvalue to zero results in dominance of the 2nd eigenvector, corresponding to sparse and balanced bipartitions. This analysis seems like a complicated way to come to the same conclusion. We respectfully disagree with the reviewer's perspective. There is a large body of literature in the GNN community where people care about precisely quantifying the classification results and non-asymptotic behavior of these models. For example, \[ref line 443, Keriven: Not Too Little, Not Too Much\] studied classification guarantees of graph convolutions in simplified statistical models and showed improved classification guarantees compared to no convolutions. However, the analysis was limited to at most $2$ convolutions. In \[ref line 494, Wu et al.\], the authors studied the non-asymptotic behavior of graph convolutions in the CSBM and precisely characterized how many convolutions it takes before the oversmoothing effect overtakes the aggregating effect of graph convolution. However, that paper does not give classification guarantees for the model. In our paper, we analyze the corrected graph convolution in the CSBM, which, in this model, mitigates the oversmoothing effect. As a result, we can give partial and exact recovery guarantees for up to $O(\log{n})$ convolutions, which has not been done in any of the aforementioned works. We also precisely quantify the number of convolutions it takes to obtain our recovery results. We also note that the proof of our exact recovery result (Theorem 4.2) requires much more sophisticated techniques than simply analyzing the rate of convergence of the eigenvalues to $\lambda_1$. This is because we have to bound the distance $\|A^kx - s\|_\infty$ instead of $\|A^kx - s\|_2$, which means simple spectral analysis is insufficient. To overcome this, we use a careful analysis of the moments of each entry of our convolved feature vector.
Finally, we argue that the simplicity of the intuition behind our analysis is a good thing, because it makes the work more accessible to a general audience. Our analysis formalizes this intuition, which most people can interpret, into concrete recovery guarantees, whereas previous studies did not. > The practical usage of removing the top eigenvector seems very limited, as one dominating signal is exchanged for another. This is confirmed in Fig 3. The eigenvector which we remove has no information about the class membership. The second eigenvector, which dominates after removing the first, does carry useful classification information. Therefore, it is not correct to say that one dominating signal is simply exchanged for another. We prove this in our Theorems 4.1 and 4.2. Fig 3 does not confirm the reviewer's statement. In fact, Fig 3 shows that removing the top eigenvector allows the model to have better performance as the number of convolutions increases. As we point out in lines 348 to 350, the idea of removing the top eigenvector is helpful in practice and has been implemented in widely cited studies (see \[ref line 512, Zhao and Akoglu\]). > What are the benefits of this analysis against existing insights that the signal corresponding to the top eigenvector of a matrix gets amplified the most? We have answered this question in detail in our reply about Weakness 4 above. > What are potential practical implications? This is similar to Weakness 5, so see the response there. To re-iterate, based on our analysis, we could recommend removing the top eigenvector from the convolution matrix. As we point out in lines 348 to 350, this is helpful in practice and has been implemented in widely cited studies (see \[ref line 512, Zhao and Akoglu\]). > l. 332: Why is the O-notation used? We will update the precise value of $p$ in the experiments. We used $O$ notation to denote that there is a constant factor associated with the value.
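A minimal sketch of the corrected aggregation discussed above, in pure Python for illustration (helper names are hypothetical). For a symmetric $A$, projecting out the top eigenvector after each multiplication is equivalent to multiplying by $A - \lambda_1 v_1 v_1^\top$, since $v_1^\top A x = \lambda_1 v_1^\top x$.

```python
import math

def matvec(M, x):
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def top_eigenpair(M, iters=500):
    """Power iteration for the dominant eigenpair of a symmetric matrix."""
    n = len(M)
    v = [1.0 / math.sqrt(n)] * n
    for _ in range(iters):
        w = matvec(M, v)
        norm = math.sqrt(sum(wi * wi for wi in w))
        v = [wi / norm for wi in w]
    # Rayleigh quotient gives the signed eigenvalue.
    lam = sum(vi * wi for vi, wi in zip(v, matvec(M, v)))
    return lam, v

def corrected_aggregation(A, x, k):
    """k rounds of corrected aggregation: multiply by A, then project out the
    dominant eigenvector (equivalent to (A - lam1 v1 v1^T)^k x for symmetric A)."""
    _, v1 = top_eigenpair(A)
    for _ in range(k):
        x = matvec(A, x)
        proj = sum(vi * xi for vi, xi in zip(v1, x))
        x = [xi - proj * vi for xi, vi in zip(x, v1)]
    return x
```

After the correction, repeated aggregation amplifies the second eigenvector instead of the uninformative first one, which is the mechanism behind the improved behavior for large $k$.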
If the reviewer has more actionable suggestions to further improve our work, please kindly let us know. We would be happy to address any further questions. We hope the clarifications are sufficient for them to consider raising their score. --- Rebuttal 2: Comment: I thank the reviewers for their detailed feedback. I can now better understand why this work can be interesting from a mathematical and statistical point of view. I would like to hear more details about the following: * In the referenced work [ref line 443, Keriven: Not Too Little, Not Too Much], the author shows that some rounds of aggregation can be helpful while too many iterations are hurtful for their considered task. Here, it seems like more rounds of aggregation are always beneficial, but fewer rounds can be sufficient. Can this study be extended in the future to include cases where the task does not align so well with the dominating eigenvector? For Figure 3, performance seems to be monotonically decreasing, so (zero or one?) iterations seem to be optimal. What would tasks look like for which some iterations of your corrected convolutions are beneficial while the performance in the limit becomes worse? Two additional comments on terms that I found irritating but not critical: * Convolution: There is no definition of what convolution means. It seems like it's just the aggregation part $\mathbf{A}$ and powers of it. Referring to this as convolution is confusing to me, as no filter is considered. In the referenced work above, the author calls this $k$ rounds of aggregation, which would also make the statements in this work clearer. * Corrected convolution: Was this term used previously in the literature? What makes this aggregation more "correct"? I agree that it seems more correct for the tasks considered in this work. However, I would argue that depending on the graph and task, various aggregation matrices can be more "correct".
--- Rebuttal Comment 2.1: Comment: The above comment was apparently not visible to the authors. I apologize and hope my question still comes in time. --- Rebuttal 3: Comment: We thank the reviewer for their reply. However, we would like to point out that this post was made very close to the deadline and during the authors' nighttime. A proper reply to the reviewer takes several hours. We provide a reply to the best of our ability given the limited time. We want to re-iterate that the corrected convolution is not ours. We mention this in our first reply above and point it out in lines 348 to 350. This idea is already implemented in widely cited practical papers [ref line 512 (ICLR 2020)]; we provide a rigorous analysis of it. > In the referenced work [ref line 443, Keriven: Not Too Little, Not Too Much], ... task. The result the reviewer refers to seems to be Theorem 2. We would like to clarify what is proved in the main theorem of that paper. The authors don't provide a non-asymptotic analysis as we do in our paper. They only provide an existential result. They prove that 1 convolution can be better, under specific assumptions, than 0 convolutions. They also prove that 1 convolution is better than infinite convolutions. This implies that there might exist a $k^* \ge 1$ which is optimal. However, they never rigorously analyze the performance of convolution for $k>1$ and $k<\infty$. In fact, the authors mention in their paper that "In the next section, we also derive an intuitive expression for $R(k)$ (although without rigorous proof), which we observe to match the numerics quite well." We would like to point out that achieving a non-asymptotic analysis as we do in our paper is technically extremely challenging and requires a whole different set of methodology, and we hope the reviewer appreciates this. > Can this study be extended ... in the limit becomes worse?
We will reply to this question by focusing on other tasks within node classification, since node classification is the focus of our paper. By "dominating eigenvector" we assume the reviewer means the second or higher eigenvectors. The dominant (first) eigenvector does not hold meaningful classification information, therefore removing it helps. We prove this in our Theorems 4.1 and 4.2. Also, Fig 3 shows that removing the top eigenvector allows the model to have better performance as the number of convolutions increases. We interpret the reviewer's comment as a question about why the performance decreases for $k$ larger than 1 or 2. The reason is not that we remove the dominant (first) eigenvector. As we point out, this is beneficial, since the first eigenvector holds no information about the classes. The reason is that in the particular real data which we used, there are eigenvectors with small eigenvalues which might contain meaningful information. Asymptotically, the corrected convolution (which, we repeat, is not our idea; see the comments above) might miss the information from such eigenvectors. Intuitively, this could be the case in a graph where there are small communities within larger communities in the given graph structure. The top eigenvectors are likely to be correlated with the larger communities. To capture the information from small eigenvalues, a different convolution is needed which filters eigenvectors in an appropriate way. However, we would like to point out again that the first eigenvector would be filtered out in this case too, since it is not meaningful. To analyze such a scenario, we would have to change our random data model. That's because in the current data model the first few top eigenvectors (excluding the first one) are the most meaningful ones, and eigenvectors for small eigenvalues are just noise. Technically, a non-asymptotic spectral analysis could also work in this hypothetical new data model.
The corrected convolution in our paper is likely to lose performance as $k$ increases. A different convolution would be needed to better capture the information of eigenvectors with small eigenvalues. This study is indeed interesting; however, it is beyond the scope of our present paper. We would be happy to mention the above in our paper. > Convolution: ... in this work clearer. We will make sure to clarify this in our paper. We come from a graph neural network background where this definition is ubiquitous, and we didn't explicitly specify it as a definition. > Corrected convolution: ... can be more "correct". We chose the word "corrected" since, in the literature on statistical community detection (with no node features), the word is often used for modified versions of the original SBM, for example: B. Karrer and M. E. J. Newman, Stochastic blockmodels and community structure in networks, Phys. Rev. E 83 (2011). In this case, the fact that one does degree "correction" does not necessarily mean that it is a universal correction for any possible problem. It's a name which makes sense within a specific context. We can offer to change the title to "Analysis of Eigen-Corrected Graph Convolutions" if the reviewer believes that this is more specific. --- Rebuttal 4: Comment: I want to thank the reviewers for their quick and detailed answers. I am now convinced of this work's benefits to the graph learning community and am open to accepting this paper. I have changed my score to 6. I want to state my final opinions below, to which the authors do not need to reply: >The dominant (first) eigenvector does not hold meaningful classification information, therefore removing it helps. [...] Fig 3 shows that removing the top eigenvector allows the model to have better performance as the number of convolutions increases. The best-achieved performance of the original aggregation and the corrected aggregation are quite similar (both for k=1).
Stating that the corrected aggregation retains more task-related information would better describe the behavior. Identifying non-SBM tasks for which "removing the top eigenvector allows the model to have better performance as the number of convolutions increases" actually holds, and for which the maximum performance of the corrected aggregation is better than that of the standard aggregation, is still open and interesting for future work. > We come from a graph neural network background where this definition is ubiquitous and we didn't explicitly specify it as a definition. In my graph neural network background, a graph convolution consists of a feature transformation and an aggregation part. As this work considers the aggregation part, I would personally find it more precise to call it $k$ rounds of aggregation instead of $k$ rounds of convolution. >We can offer to change the title to "Analysis of Eigen-Corrected Graph Convolutions" if the reviewer believes that this is more specific. I don't mind if the authors keep the original title. To me personally, something like "Analysis of CSBM-Corrected Graph Aggregations" would seem clearer. --- Rebuttal Comment 4.1: Comment: We are grateful. Thank you. We will make appropriate modifications to address your concerns in the revised version.
Summary: This paper studies the concept of oversmoothing via a CSBM modeling of a GNN structure, examining the behavior of feature vectors under repeated multiplication by a graph matrix. Importantly, they consider a scheme where a dominant eigenvector is "left out", so that it does not dominate the behavior of the evolution so much, and show enhanced performance over real-world datasets. Strengths: This paper explores an important problem from a simple enough framework that it is tractable. There are a ton of statistical results, which all look reasonable and tell a reasonable story about the balance between learning and oversmoothing. The idea of discarding the dominant eigenvector is also interesting, and clearly does show improved results in the numerics. Weaknesses: It's not clear to me how informative / predictive the various statistical results are of oversmoothing in practice. Can there be some figures that show the theoretical bounds, compared with true performance over some simulated examples? For example, one of the theorems uses Davis-Kahan, which is pretty pessimistic in practice. In general, perhaps the spectral gap alone is not informative enough to showcase the entirety of the oversmoothing behavior, and while that is certainly fine given the complexity of the problem, it still would be useful to see the theorems compared with in-practice behavior. Technical Quality: 3 Clarity: 3 Questions for Authors: What is CSBM? I know SBM but what makes it C? Have you considered scenarios where the data has clear block structure, but it does not 100% overlap with the true labels? Will it cause undesirable biases? Can those be quantified? Actually, one interpretation of this is that the adjacency spectrum somewhat mirrors the Laplacian spectrum, but in reverse order. So, the dominant eigenvector corresponds to the nullspace of L, and the second one is the one associated with the mixing parameter in L.
So, in that sense, it makes sense why focusing on that eigenspace gives better performance. I would be interested in hearing the authors' view on this. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: no ethical issues Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
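An editorial aside on the Davis–Kahan remark above: the bound's pessimism is easy to see numerically. The sketch below uses hypothetical parameters (a rank-one signal matrix plus a small symmetric perturbation, unrelated to the paper's CSBM setting) and compares a standard Davis–Kahan-type bound, $\sin\theta \le 2\|E\|/(\lambda_1-\lambda_2)$, against the realized rotation of the top eigenvector:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Rank-one "signal" matrix with top eigenvalue 5 and spectral gap 5.
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
A = 5.0 * np.outer(v, v)

# Small symmetric perturbation E.
G = rng.standard_normal((n, n))
E = 0.05 * (G + G.T) / 2

def top_eigvec(M):
    # eigh returns eigenvalues in ascending order; take the last column.
    return np.linalg.eigh(M)[1][:, -1]

u, u_pert = top_eigvec(A), top_eigvec(A + E)
sin_theta = np.sqrt(1.0 - min(1.0, abs(u @ u_pert)) ** 2)

bound = 2 * np.linalg.norm(E, 2) / 5.0  # Davis–Kahan-type bound: 2||E|| / gap
print(f"actual sin(theta): {sin_theta:.4f}, bound: {bound:.4f}")
assert sin_theta <= bound
```

In runs like this one, the realized angle tends to be several times smaller than the bound, which is exactly the reviewer's point about pessimism in practice.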
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the insightful questions and comments along with the encouraging review. We answer the questions below. > Comment/question 1: It's not clear to me how informative / predictive the various statistical results are to oversmoothing in practice. Can there be some figures that show the theoretical bounds, compared with true performance over some simulated examples? For example, one of the theorems uses Davis/Kahan, which is pretty pessimistic in practice. In general, perhaps the spectral gap alone is not informative enough to showcase the entirety of the oversmoothing behavior, and while that is certainly fine given the complexity of the problem, it still would be useful to see the theorems compared with in-practice behavior. We assume that the reviewer would like to see the theoretical classification accuracy bound for Theorems 4.1 and 4.2 in the same plots as our simulated synthetic experiments. Please let us know if this is not the correct interpretation of your question. We would be happy to update our reply during the discussion period. In case that's the correct interpretation of your question, we provide updated plots with the theoretical bound for partial recovery, Theorem 4.1, in the response pdf. We would like to note that the exact classification thresholds from Theorem 4.2 are demonstrated with vertical lines in the current plots in Figures 1 and 2 of our paper. > Question 2: What is CSBM? I know SBM but what makes it C? Context is referred to in the literature as the features of the nodes. \textcolor{red}{Are there any historical reasons that the features are called context here? Cite the original CSBM paper and point to Section 3.1} > Question 3: Have you considered scenarios where the data has clear block structure, but it does not 100\% overlap with the true labels? will it cause undesirable biases? can those be quantified? This is a very good question. We have thought about this a little bit.
If there is a mismatch between the community structure in the graph and classes of the feature vectors of the nodes, then the mismatched vertices will probably be pulled to the opposite means by the convolutions. This means that there will be more misclassified nodes in the partial classification result, and the threshold of exact classification will be worse. We can probably still analyze this case by splitting the feature vector into the part corresponding to correctly matched vertices and the part corresponding to mismatched vertices and analyzing the error from each part separately. If the mismatch is small, we should be able to get reasonable partial classification results. This could be an interesting avenue for future work. > Question 4: Actually, one interpretation of this is that the adjacency spectrum somewhat mirrors the Laplacian spectrum, but in reverse order. So, the dominant eigenvector corresponds to the null space of L, and the second one is the one associated with the mixing parameter in L. So, in that sense, it makes sense why focusing on that eigenspace gives better performance. I would be interested in hearing the authors' views on this. We agree with this interpretation, and that's exactly what happens in our analysis for CSBM. We will mention the relation to the Laplacian more explicitly in our revision. Final remarks Once again, we would like to express our gratitude for the thorough examination and feedback of our work. We believe that it certainly helped us to improve our manuscript. If the reviewer has more actionable suggestions to further improve our work, please kindly let us know. We would be happy to address any further questions. We hope the clarifications and additional comments are sufficient for them to consider raising their score. --- Rebuttal Comment 1.1: Title: Correction of typo in the response Comment: Dear reviewer, we apologize for the typo in the response to Question 2: What makes it C in CSBM? 
The C (context) here refers to the node features, since the data model consists of node features in addition to the graph. This term has been used in the statistics community to describe such data with a combination of two components: the graph and the node attributes. Please refer to Section 3.1 for a detailed description of the CSBM, and to [Deshpande, Y. and Sen S. and Montanari, A. and Mossel, E. Contextual stochastic block models. Advances in Neural Information Processing Systems, 2018.] where it was introduced. --- Rebuttal Comment 1.2: Comment: Thanks, I am happy with all the responses. The new figures showing the bound vs performance are also very interesting, and a great improvement on the paper. I have no further questions. --- Reply to Comment 1.2.1: Comment: Thank you! We will incorporate the modifications in the revised version of the paper.
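For readers following this thread, the model under discussion is easy to sample. Below is an illustrative sketch of a two-class CSBM with hypothetical parameters (the function name `sample_csbm` is ours, not the authors'): a graph with within-class edge probability $p$ and cross-class probability $q$, plus Gaussian node features ("context") with class-dependent means:

```python
import numpy as np

def sample_csbm(n=200, m=5, p=0.3, q=0.05, sigma=1.0, seed=0):
    """Sample a two-class contextual SBM: a graph plus Gaussian node features."""
    rng = np.random.default_rng(seed)
    y = np.repeat([1.0, -1.0], n // 2)        # balanced class labels (+1 / -1)
    mu = rng.standard_normal(m)               # shared mean direction
    # Graph part: edge probability p within a class, q across classes.
    probs = np.where(np.equal.outer(y, y), p, q)
    A = np.triu(rng.random((n, n)) < probs, 1).astype(float)
    A = A + A.T
    # "Context" part: features from a two-mean Gaussian mixture (means +/- mu).
    X = np.outer(y, mu) + sigma * rng.standard_normal((n, m))
    return A, X, y

A, X, y = sample_csbm()
same = np.equal.outer(y, y)
assert A[same].mean() > A[~same].mean()       # communities are denser inside
```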
Summary: This paper studies over-smoothing from $k$ rounds of graph convolutions in the Contextual Stochastic Block Model by considering vanilla graph convolutions and a corrected one where the principal eigenvector is removed. Using spectral analysis, the authors derive the partial and exact recovery for both cases and show that the corrected convolution avoids over-smoothing. They derive the classification error for different densities $p$, $q$ and separability $\gamma$ of the underlying graph model. Strengths: 1. The theoretical analysis is thorough and comprehensive. The studied setting is also of practical significance and the authors provide rigorous theoretical analysis. 2. The presentation is clear. The discussions on the theoretical results and assumptions are very helpful in understanding the paper. 3. Experiments also support the theoretical results. Weaknesses: There are no critical weaknesses in the work. I have a few comments/questions which I list below. 1. One aspect of the result is the dependence on the density of the graph as well as good SNR ratio. Can the authors comment on the need for denser connectivity in the graph for recovery? 2. The performance of the corrected convolution decreases with $k$ in the case of a multi-class setting. The authors reason it as the projection onto the second eigenvector cannot capture the full class information. However, in the balanced data setting, all the eigenvalues of the corrected convolution are the same, with eigenvectors having information about the classes. So, in expectation, shouldn't the performance be unaffected? 3. The analysis considers homophily structure of the graph $p>q$. Can it be extended to heterophily case as well? 4. In the definition of $\hat{A}$ in equation 1, shouldn't the last $D$ be $D^{-1/2}$? The negative sign is missing. Technical Quality: 4 Clarity: 3 Questions for Authors: Please refer to the Weaknesses. 
Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The analysis is in linear setting, and extending it to non-linear activations may be challenging. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thought-provoking questions and comments, and are grateful for the encouraging review. We answer the questions below. > Question 1: One aspect of the result is the dependence on the density of the graph as well as good SNR ratio. Can the authors comment on the need for denser connectivity in the graph for recovery? In general, the lower bound on density ($p$ value) is required for degree and matrix concentration (see Proposition 5.3). In case the reviewer is asking why Theorem 4.2 requires a higher density than Theorem 4.1, this is just an artifact of the more technical analysis for Theorem 4.2. We believe with better analysis, we can probably remove the extra $\log n$ requirement in Theorem 4.2. Specifically, the proof of Theorem 4.2 requires some combinatorial counting arguments to bound the moment of the entry-wise error in the convolved feature vector. In counting these combinatorial objects, we sometimes applied some crude upper bounds which are likely not tight. > Question 2: The performance of the corrected convolution decreases with $k$ in the case of a multi-class setting. The authors reason it as the projection onto the second eigenvector cannot capture the full class information. However, in the balanced data setting, all the eigenvalues of the corrected convolution are the same, with eigenvectors having information about the classes. So, in expectation, shouldn't the performance be unaffected? That is a good observation. Indeed if the convolution matrix's top eigenvalue has multiplicity $L$, then after many convolutions, we are projecting onto the top $L$-eigenspaces. In response to these insights, we have produced new analysis on multi-class CSBM (see global response), in which we generalize Theorem 4.1 (partial recovery result) to the multi-class setting with balanced classes. 
If there are $L$ classes, the second eigenvalue of the expected adjacency matrix has multiplicity $L-1$, just as the reviewer pointed out. We show in our analysis that if the perturbation is small and $k$ is not too large, the corrected convolution will still behave like projecting onto the second through $L^{th}$ eigenspaces of the expected adjacency matrix. In the case of real data, it is still likely the case that projecting the features onto the top few eigenspaces is not sufficient to capture all the information of the class labels and could also destroy some relevant information within the feature distribution. However, we still demonstrate that simply removing the top eigenvector leads to improved performance in the oversmoothing regime. We thank the reviewer for this observation and will modify our write-up accordingly to include this discussion. > Question 3: The analysis considers homophily structure of the graph $p>q$. Can it be extended to heterophily case as well? Our results hold for the case where $q>p$ as well. See lines $159$ to $161$ in our paper. > Question 4: In the definition of $\hat{A}$ in equation 1, shouldn't the last $D$ be $D^{-1/2}$? The negative sign is missing. Thanks for catching the typo. The second term should be normalized by $\mathbf{1}^\top D\mathbf{1}$. Here, $D^{1/2}\mathbf{1}$ is the top eigenvector of the normalized adjacency matrix. > Limitation: The analysis is in linear setting, and extending it to non-linear activations may be challenging. In our analysis on binary classification, the linear classifier was sufficient. However, for multi-class data, we could require non-linear classifiers. We have extended our partial recovery result (Theorem 4.1) to the multi-class setting, which involves the use of a non-linear classifier (see global response). In our analysis, we have $L$ classes, and we assume features are generated by a Gaussian mixture model with one mean for each class.
In our results, we show that the graph signal is measured by the quantity $\lambda = \frac{(p-q)n}{dL}$ where $d$ is the expected degree. The graph noise is bounded by $\delta = O(\frac{1}{d}(\sqrt{np(1-p)/L} + \sqrt{nq(1-q)}))$. We show that as long as $|\lambda| \gtrsim k\delta$, and the cluster means are well separated, then after $k$ rounds of convolution of the data, a $1-o(1)$ fraction of points will be closer to the mean of their class than to that of any other class. Given this guarantee, we can correctly classify a $1-o(1)$ fraction of the nodes using the non-linear classifier $x\mapsto \text{softmax}(-\|x-c_l\|^{2})_{l=1}^L$, where the $c_l$'s are the cluster means of the classes. This is a quadratic classifier, where the means $c_l$ are the learnable parameters. We note that in our model, we assumed there is only one mean for each class. It is possible to add more complexity to the model by having multiple means in each class and considering different ways they can be distributed. See, for example, the reference in line 385 where the authors analyze an XOR-based data model for binary classification, requiring a non-linear classifier. We hope to extend our analysis to capture these more general models and classifiers as well. Final remarks Once again, we would like to express our gratitude for the thorough examination of and feedback on our work. We believe that it certainly helped us to improve our manuscript. If the reviewer has more actionable suggestions to further improve our work, please kindly let us know. We would be happy to address any further questions. We hope the clarifications and additional comments are sufficient for them to consider raising their score. --- Rebuttal Comment 1.1: Comment: I thank the authors for their clarifications and for providing the analysis for the multi-class setting. I retain my score as I didn't have any major concerns in my initial review, and I recommend acceptance of the paper.
--- Reply to Comment 1.1.1: Comment: We thank the reviewer for recommending acceptance of the paper. We will address all comments in the revised version of the paper.
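The answer to Question 4 above, that $D^{1/2}\mathbf{1}$ is the top eigenvector of the normalized adjacency matrix and that the correction term is normalized by $\mathbf{1}^\top D\mathbf{1}$, can be checked numerically. A minimal sketch on a random graph, illustrative only and not tied to the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
# Random Erdos-Renyi-style graph, dense enough that every node has neighbors.
A = np.triu(rng.random((n, n)) < 0.2, 1).astype(float)
A = A + A.T

deg = A.sum(1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
A_norm = D_inv_sqrt @ A @ D_inv_sqrt    # D^{-1/2} A D^{-1/2}

v = np.sqrt(deg)                        # v = D^{1/2} 1
assert np.allclose(A_norm @ v, v)       # eigenvector of A_norm with eigenvalue 1

# Corrected operator: subtract the top eigen-direction, normalized by 1^T D 1.
A_hat = A_norm - np.outer(v, v) / deg.sum()
assert np.allclose(A_hat @ v, 0)        # the top direction is annihilated
```

The second assertion holds exactly up to floating point: $v^\top v = \mathbf{1}^\top D \mathbf{1}$, so the subtracted rank-one term maps $v$ to itself.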
Summary: In this paper, the authors present a comprehensive theoretical analysis using the contextual stochastic block model (CSBM) to evaluate the performance of vanilla graph convolution after removing the principal eigenvector to prevent over-smoothing. They conduct a spectral analysis for k rounds of corrected graph convolutions and provide findings for both partial and exact classification. Strengths: 1. The paper provides both detailed theoretical analysis and experiments on three datasets: CORA, CiteSeer, and Pubmed. 2. The paper provides a novel insight on why corrected convolution can mitigate over-smoothing and improve classification accuracy. Weaknesses: 1. The synthesized data by CSBM might be adequate to illustrate the binary classification case, but the multi-class case could be more complicated. For instance, it is illustrated in [1] that the effects of signed propagation under binary-class case and multi-class case could be quite different. It is mentioned that the authors would like to analyze the multi-class CSBM using more sophisticated architectures and I look forward to the further analysis. 2. Instead of stacking more than 20 layers of GNNs, it is a common practice to limit the number of layers to 3 to 5. According to Figure 3, the accuracy of the three learning methods appears more complex when the number of layers is limited to 5 or fewer. Therefore, the conclusion may not be applicable to real-world data applications. [1] Choi, Yoonhyuk, et al. "Improving Signed Propagation for Graph Neural Networks." arXiv preprint arXiv:2301.08918 (2023). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. (Refer to weakness # 2) Would it be possible to analyze GNNs employing various convolution methods with shallower layers? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have addressed some of the limitations of their work (limited to the binary classification case). 
Their work does not present any potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the thorough feedback on our work. It certainly helped us improve our manuscript. > The multi-class case could be more complicated. It is mentioned that the authors would like to analyze the multi-class CSBM using more sophisticated architectures and I look forward to further analysis. We have extended our partial recovery result (Theorem 4.1) to the multi-class setting. The statement and proof of our result are in the global response. In our analysis, we have $L$ classes, and features are generated by a Gaussian mixture model with one mean for each class. We show that the graph signal is measured by the quantity $\lambda = \frac{(p-q)n}{dL}$ where $d$ is the expected degree. The graph noise is bounded by $\delta = O(\frac{1}{d}(\sqrt{np(1-p)/L} + \sqrt{nq(1-q)}))$. We assume that $|\lambda| \gtrsim k\delta$, and the cluster means are well separated. We show that after $k$ rounds of convolution of the data for sufficiently large $k$, a $1-o(1)$ fraction of points will be closer to the mean of their class than to that of any other class. This means a classifier exists that correctly classifies a $1-o(1)$ fraction of the data. Our bounds are similar to Theorem 4.1 in our paper. We refer the reviewer to Table 1 in the response pdf for a detailed discussion of our theoretical guarantee in different regimes of parameters. We also provide experimental results on synthetic data for the multi-class CSBM that mirror our theoretical bounds (see the response pdf). Finally, we note that for multi-class CSBM, there is a wide range of assumptions one can make about the class sizes, edge probabilities, and distribution of features. One key assumption we made is that each class has one feature mean. Other models (ref line 385) have analyzed instances where one class can have multiple means. We believe we can extend our analysis to even broader multi-class settings in the future with additional techniques.
> It is illustrated in [1] that the effects of signed propagation under the binary-class case and the multi-class case could be quite different. This is a good comment. We will discuss this in our revision along with the reference you provided. Indeed, the heterophilic setting in the multi-class problem can be more difficult than the homophilic setting. Using our new theorem above, we formalize this observation below using CSBM. Just as in the 2-class case, there is no difference in our analysis between the homophilic $(p>q)$ and heterophilic $(q > p)$ cases. Note that in the homophilic case, $\lambda>0$ and in the heterophilic case, $\lambda<0$. Implicitly, we are applying signed propagation in the heterophilic case in Theorem 1 because of our scaling factor of $1/\lambda^k$. Thus, one can view it as applying the convolution $\tilde{A}/\lambda$ each time. We show that as long as $|\lambda| \gtrsim k\delta$, the data will become well-separated after $k$ convolutions. This is because $\tilde{A}^k$ will tend towards its dominant eigenspaces, which correspond to the eigenvalues that are large in absolute value. However, the value of $\delta$ itself can be much worse in the heterophilic case. In Lemma 2, our upper bound on the noise from the graph, $\delta$, scales with $\sqrt{np/L} + \sqrt{nq}$ when $p,q < 1/2$. Assuming we fix $\min(p,q)$ to be small, if $p \gg q$, this quantity is much smaller than when $q \gg p$. This suggests that while the signal strength from the graph is the same, the noise can become much worse if we have many classes in the heterophilic case. > Comment 3 and Question: It is a common practice to limit the number of layers to 3 to 5. According to Figure 3, the accuracy of the three learning methods appears more complex when the number of layers is limited to 5 or fewer. Therefore, the conclusion may not apply to real-world data applications. Would it be possible to analyze GNNs employing various convolution methods with shallower layers?
We agree with the reviewer that for shallow networks, there might not be a performance difference between the three convolutions tested in our real experiments. Since the reviewer seems to ask for a theoretical comparison among the convolutions in our paper, we can compare our exact classification result (Theorem 4.2) with previous work in the same model but with uncorrected convolutions. We observe that even in the case of one convolution, under appropriate assumptions in CSBM, the uncorrected convolution achieves the same performance as the corrected convolution, thus verifying the reviewer's claim in synthetic data as well. In particular, in our Theorem 4.2, if $k = 1$ (one convolution) and $\frac{p-q}{p+q} = \Omega(1)$, then the exact classification threshold for the feature signal-to-noise ratio $\frac{\|\mu-\nu\|}{\sigma}$ is about $\sqrt{\frac{\log{n}}{np}}$. Theorem 1.2 of the reference in line 381 shows the same separability threshold for uncorrected convolutions in the same setting up to a factor of $\sqrt{\log{n}}$, which can be removed by more careful analysis (see ref line 385). In our paper, we quantify how many convolutions are necessary to attain optimal behavior in CSBM. In Section 4, we provide several examples of model parameter values for which only a constant number of convolutions is optimal. We provide more such examples in the multi-class model in our response pdf. Concerning a high number of convolutions, we note that oversmoothing is a well-studied problem in the GNN community, which we do not claim to have resolved. However, we believe our results offer new theoretical insights into the oversmoothing phenomenon that we hope could be useful in future studies constructing deeper GNNs that are more effective. Final remarks If the reviewer has more actionable suggestions to further improve our work, please kindly let us know. We would be happy to address any further questions. 
We hope the clarifications are sufficient for them to consider raising their score. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: The authors' response has adequately addressed my main concerns on the multi-class scenario and the analysis on shallower networks. I have raised my rating. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for raising their score. We will add the additional result in the revised version of the paper and address all comments.
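As a complement to the depth discussion above, the gap between the standard and corrected convolutions at large depth is straightforward to reproduce on synthetic CSBM data. The sketch below uses hypothetical parameters and a simple scale-invariant separation score; the corrected operator follows the $\tilde{A} = \frac{1}{d}A - \frac{1}{n}\mathbf{1}\mathbf{1}^\top$ form from the global response:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p, q, k = 400, 4, 0.5, 0.1, 20
y = np.repeat([1.0, -1.0], n // 2)

# Two-class CSBM sample (hypothetical, strongly separated parameters).
probs = np.where(np.equal.outer(y, y), p, q)
A = np.triu(rng.random((n, n)) < probs, 1).astype(float)
A = A + A.T
X = np.outer(y, np.full(m, 0.5)) + rng.standard_normal((n, m))

d = n * (p + q) / 2                     # expected degree
lam = (p - q) * n / (2 * d)             # second eigenvalue of E[A] / d
S = A / d                               # plain rescaled convolution
C = S - np.ones((n, n)) / n             # corrected: top eigen-direction removed

def separation(Z):
    """Scale-invariant class-separation score."""
    m0, m1 = Z[y > 0].mean(0), Z[y < 0].mean(0)
    return np.linalg.norm(m0 - m1) / Z.std()

Zs, Zc = X.copy(), X.copy()
for _ in range(k):
    Zs, Zc = S @ Zs, C @ Zc
Zc /= lam ** k                          # rescaling as in the global response

print(separation(Zs), separation(Zc))   # standard collapses, corrected does not
assert separation(Zc) > separation(Zs)
```

After 20 rounds, the standard convolution's rows collapse toward the dominant (degree-related) direction, so the class means nearly coincide, while the corrected convolution keeps the classes linearly separable.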
Rebuttal 1: Rebuttal: $$ \def\norm#1{{\|#1\|}} \def\E{{\mathbb{E}}} $$ ## Multi-class Analysis We define the L-Block CSBM with parameters $p,q,L,n,m$. We have $n$ nodes and $L$ classes, $\mathcal{C}_1,\dots,\mathcal{C}_L$, of size $n/L$. For each node $i$, we have a feature vector $x_i\in \mathbb{R}^m$ with distribution $N(\mu_i, \sigma^2 I_m)$. For each class $\mathcal{C}_l$, $l\in [L]$, we let $c_l \in \mathbb{R}^m$ be the class mean, and for $i\in \mathcal{C}_l$, $\mu_i = c_l$. We define $$\tilde{A} = \frac{1}{d}A - \frac1n\mathbf{1}\mathbf{1}^\top,$$ where $d = \frac{np}{L} + \frac{nq(L-1)}{L}$ is the expected degree of each vertex. Note $d$ can be estimated accurately with high probability by degree concentration (see Proposition 5.2). We introduce useful notation below. - **Graph Signal:** $\lambda := \frac{(p-q)n}{dL}$ is the strength of the signal from the graph. - **Graph Noise:** $\delta := C(\frac{1}{d}(\sqrt{np(1-p)/L} +\sqrt{nq(1-q)}))$ for some constant $C$. $\delta$ is an upper bound on the graph noise. - Let $U:=\E[X]$ be the matrix whose $i^{th}$ row is $\mu_i$. We will use $\norm{U}_F^2$ to denote its Frobenius norm. We also assume our features are centered on expectation so that $U^\top \mathbf{1} = 0$. This is not restrictive since it can be satisfied by applying a linear shift to the features. - Let $\Delta = \min_{l\neq l'}\norm{c_l-c_{l'}}$ be the minimum distance between the centers. ### Theorem 1 Given the CSBM with parameters $p,q,L,n,m$, suppose $\min(p,q) \geq \Omega(\frac{\log^{2}n}{n})$ and $|\lambda| > 4k\delta$. Let $X^{(k)} = \frac{1}{\lambda^k}\tilde{A}^{k}X$ be the feature matrix after $k$ rounds of convolutions with scaling factor $1/\lambda^k$. Let $x^{(k)}_i$ be the $i^{th}$ row of the matrix $X^{(k)}$.
Then with probability $1-n^{-\Omega(1)}$, at least $n - n_e$ nodes, $i$, satisfy $\norm{x^{(k)}_i - \mu_i}< \Delta/2$ where $$n_e = O\Big((k \delta/|\lambda|)^2\frac{\norm{U}_F^2}{\Delta^2} + (L + n(\delta/|\lambda|)^{2k})\frac{\sigma^2m\log{n}}{\Delta^2}\Big).$$ In particular, the quadratic classifier $x\mapsto \text{softmax}(-\norm{x-c_l}^2)_{l=1}^L$ will correctly classify at least $n-n_e$ points. --- See the attached PDF for examples of our bound in specific settings as well as experimental results. The following lemma explains how the expressions for the graph signal and noise are derived. ### Lemma 2 The convolution matrix $\tilde{A}$ can be decomposed as $\tilde{A} = M + R'$ where: - $M = \E[\tilde{A}]$ has rank $L-1$, with $L-1$ eigenvalues equal to $\frac{(p-q)n}{dL} = \lambda$. Also, $MU = \lambda U$ - $R'$ is a random matrix such that with probability $\ge 1-n^{-\Omega(1)}$, $\norm{R'} \leq O(\frac{1}{d}(\sqrt{np(1-p)/L} + \sqrt{nq(1-q)})) = \delta$ --- The proof is standard, so we only give a sketch due to space limitations. We are happy to give the full proof in the discussion period. For item 1, we note $\E[A]$ is rank $L$ and has top eigenvector $\mathbf{1}$ with eigenvalue $d$. Its second eigenvalue is $\lambda$ with multiplicity $L-1$, and the eigenspace is the set of vectors orthogonal to $\mathbf{1}$ that are constant on each class. Since each class has one center and $U^\top \mathbf{1} = 0$, the columns of $U$ are in the second eigenspace. To obtain item 2, we decompose $A-\E[A]$ into entries with probability $p$ and entries with probability $q$. Then we bound them separately using the matrix concentration in Theorem A.4 of the paper. We will also need to bound the operator norm distance between the $k^{th}$ convolution and $M^k$. ### Lemma 3 Suppose $|\lambda| > 4k\delta$.
Then with high probability, we have $$\norm{\frac{1}{\lambda^k}(\tilde{A}^k - M^k)} \leq 2k\delta/|\lambda|.$$ #### Proof By Lemma 2, we have $$\tilde{A}^k = (M+R')^k = M^k + \sum_{l=1}^k\sum_{b\in {[k]\choose l}}\prod_{i=1}^k M^{1-b(i)}R'^{b(i)},$$ where the inner sum is over bit-strings $b$ of length $k$ with exactly $l$ $1$'s and $k-l$ $0$'s. Note that $\norm{M} = |\lambda|$ and $\norm{R'}\leq \delta$ with high probability. Using the fact that $\norm{AB}\leq \norm{A}\norm{B}$ and triangle inequality, we have $$\norm{\frac{1}{\lambda^k}((M+R')^k - M^k)} \leq \frac{1}{|\lambda|^k}\sum_{l=1}^k{k\choose l}\norm{M}^{k-l}\norm{R'}^{l} \leq \sum_{l=1}^k{k\choose l}(\frac{\delta}{|\lambda|})^{l} = (1+\frac{\delta}{|\lambda|})^{k}-1$$ Our assumption that $|\lambda| > 4\delta k$ implies the RHS is at most $2k\delta/|\lambda|$. --- Now we are ready to prove our main theorem. #### Proof of Theorem 1 We decompose our data as $X = U + G$, where $G$ is a Gaussian matrix with i.i.d $N(0, \sigma^2)$ entries. We will decompose our error into error from the graph and error from the feature noise. Recall that we take our scaling factor to be $1/\lambda^k$. We have $$\norm{X^{(k)}-U}_F^2 = \norm{\frac{1}{\lambda^k}\tilde{A}^k(U + G)-U}_F^2 \leq 2\norm{\frac{1}{\lambda^k}\tilde{A}^kU - U}_F^2 + 2\norm{\frac{1}{\lambda^k}\tilde{A}^k G}_F^2.$$ By Lemma 2, we have $MU = \lambda U$. Thus, we have $$\norm{\frac{1}{\lambda^k}\tilde{A}^kU - U}_F^2 = \norm{\frac{1}{\lambda^k}(\tilde{A}^k - M^k)U}_F^2 \leq \norm{\frac{1}{\lambda^k}(\tilde{A}^k - M^k)}^2\norm{U}_F^2 \leq (k\delta/|\lambda|)^2\norm{U}_F^2,$$ where the inequality follows from Lemma 3. 
By standard Gaussian norm concentration for $G$ (see Theorem A.1 of the paper), we have with high probability $$\norm{\frac{1}{\lambda^k}\tilde{A}^k G}_F^2 \leq \frac{1}{\lambda^{2k}}Tr(\tilde{A}^{2k})\sigma^2m\log{n}$$ Since adding $R'$ to $M$ perturbs its eigenvalues by at most $\delta$ (see Theorem A.1 in the paper), we have $$\frac{1}{\lambda^{2k}}Tr(\tilde{A}^{2k}) \leq (1 + \delta/|\lambda|)^{2k}(L-1) + n(\delta/|\lambda|)^{2k}$$ Note that $|\lambda| > 4\delta k$ implies $(1 + \delta/|\lambda|)^{2k} \leq 2$. Now let $n_e$ be the number of points $i$ such that $\norm{x^{(k)}_i - \mu_i} \geq \Delta/2$. Then we have $$n_e\Delta^2\leq 4\norm{X^{(k)}-U}_F^2 \leq O((k\delta/|\lambda|)^2\norm{U}_F^2 + (L + n(\delta/|\lambda|)^{2k})m\sigma^2\log{n}).$$ Pdf: /pdf/8ad5c4af4ae1c1454cc3aae0f8b73bcfed808adc.pdf
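As a quick numerical sanity check on Theorem 1 above, the sketch below (hypothetical parameters chosen well inside the high-signal regime; not the authors' experiments) samples an $L$-block CSBM, applies $k$ corrected convolutions rescaled by $1/\lambda^k$, and classifies each node by its nearest class mean:

```python
import numpy as np

rng = np.random.default_rng(0)
L, n, m, p, q, k = 4, 400, 4, 0.6, 0.05, 6
y = np.repeat(np.arange(L), n // L)

# Centered class means (so U^T 1 = 0, as assumed above). Here m = L, so the
# centered one-hot-style means live in R^m; row l of C is the center c_l.
C = 3.0 * (np.eye(L) - 1.0 / L)
X = C[y] + rng.standard_normal((n, m))

# L-block SBM graph: edge probability p within a class, q across classes.
probs = np.where(np.equal.outer(y, y), p, q)
A = np.triu(rng.random((n, n)) < probs, 1).astype(float)
A = A + A.T

d = n * p / L + n * q * (L - 1) / L     # expected degree
lam = (p - q) * n / (d * L)             # graph signal lambda
A_tilde = A / d - np.ones((n, n)) / n   # corrected convolution matrix

Z = X.copy()
for _ in range(k):
    Z = A_tilde @ Z
Z /= lam ** k                           # scaling factor 1 / lambda^k

# Theorem 1's classifier: assign each convolved point to its nearest center.
pred = np.argmin(((Z[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
acc = (pred == y).mean()
print(f"accuracy after {k} corrected convolutions: {acc:.3f}")
assert acc > 0.9
```

With these parameters the convolved points concentrate near (a scaled copy of) their class centers, so the nearest-center rule recovers nearly all labels, as the theorem predicts for the well-separated regime.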
NeurIPS_2024_submissions_huggingface
Learning Disentangled Representations for Perceptual Point Cloud Quality Assessment via Mutual Information Minimization
Accept (poster)
Summary: This paper explores a method for No-Reference Point Cloud Quality Assessment. The key idea is to use disentangled representation learning to minimize mutual information between representations of point cloud content and distortion. The authors conduct experimental performance comparisons on three public databases and compare their proposed method with 15 existing models. The proposed method achieves optimal or suboptimal results in most of the metrics. The ablation study demonstrates the necessity of each design component. Strengths: The paper is clearly written and provides a detailed formulation of the approach. Weaknesses: The weak point of the paper is its presentation. Many terms are introduced without explaining them properly, e.g. "the tight upper bound" or "the masked autoencoding strategy". Furthermore, Fig. 2 is not well designed with respect to the content-aware branch. Fig. 2 combines the proposed architecture with the content-aware pretraining and masked autoencoding strategy, which does not help the reader understand the architecture from the figure. Lastly, it would be highly beneficial to make the code publicly available to enhance collaborative efforts and facilitate the sharing of this work. Apart from this, there are quite a few typos and grammar mistakes that should be corrected, such as "the can be" and the inconsistent "masked auto-encoding" / "masked autoencoding". Technical Quality: 2 Clarity: 2 Questions for Authors: See Weaknesses. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: See Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **`R4-Weakness 1`**: Thank you for your valuable comments about the presentation. We will provide more detailed explanations and make the paper easier to understand. The explanations of some terms are as follows: * The *tight upper bound* of mutual information (MI) is an upper bound that is always at least the actual value of MI. A tight upper bound means the bound is close to the actual value of MI and equal to MI under certain conditions. * The *masked autoencoding (MAE)* strategy is a popular visual self-supervised learning scheme, which masks some patches of the input image and trains a model to reconstruct the masked patches. This method helps in learning robust representations of unlabeled images. * The *grid mini-patch sampling* means dividing an image into smaller, non-overlapping patches (mini-patches) arranged in a grid-like structure. * The *variational distribution* is a key concept in variational inference, a technique used in Bayesian statistics for approximating complex probability distributions. It is essentially a tractable distribution used to approximate the true posterior distribution of the model's latent variables. **`R4-Weakness 2`**: Thank you for your constructive comments about Fig. 2; we have modified it following your suggestion. We split the architecture and the content-aware pretraining (i.e., the proposed masked autoencoding strategy) into two figures, and make some detailed modifications, as shown in `attached file Figure B` and `attached file Figure C`. This can help readers better understand the overall architecture and the pretraining strategy. **`R4-Weakness 3`**: Thank you for your comments about the code. The code is attached in the supplementary material (in the .zip file), and we will upload the code to Github to make it publicly available. **`R4-Weakness 4`**: Thank you for your comments about typos and grammar mistakes.
We will carefully correct these detailed mistakes and ensure there will be no such mistakes in the final version. --- Rebuttal 2: Title: Would you please have a look at the author rebuttal? Comment: Dear Reviewer, Thanks a lot for contributing to NeurIPS2024. The authors have provided detailed responses to your review. Would you please have a look at them at your earliest convenience? Thanks again. AC
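The grid mini-patch sampling described in the terminology list above can be sketched in a few lines. This is an illustrative reconstruction for a single projected image (the function name, grid size, and patch size are our assumptions, not the authors' implementation, which integrates patches across multiple views):

```python
import numpy as np

def mini_patch_map(img, grid=7, patch=8, seed=0):
    """Split the image into a grid x grid layout, take one small patch per
    cell, and tile the patches into a compact mini-patch map."""
    rng = np.random.default_rng(seed)
    H, W, C = img.shape
    ch, cw = H // grid, W // grid
    out = np.empty((grid * patch, grid * patch, C), dtype=img.dtype)
    for i in range(grid):
        for j in range(grid):
            # Random patch position inside grid cell (i, j).
            y = i * ch + rng.integers(0, ch - patch + 1)
            x = j * cw + rng.integers(0, cw - patch + 1)
            out[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = \
                img[y:y + patch, x:x + patch]
    return out

img = np.random.default_rng(1).random((224, 224, 3))
mmap = mini_patch_map(img)
assert mmap.shape == (56, 56, 3)        # 7 cells x 8 pixels per side
```

The appeal of this kind of sampling is that local (distortion-revealing) texture is preserved inside each patch while the grid layout keeps a coarse global view, at a fraction of the full-resolution cost.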
Summary: This paper proposes a novel no-reference quality assessment model tailored for point-cloud data. A disentangled representation learning strategy is leveraged to account for both content-aware information and distortion-aware information. Comprehensive experiments are conducted and the effectiveness of this proposed model is well verified. Strengths: The proposed framework is novel and the paper is easy to understand. Weaknesses: 1. The idea of framing the quality assessment into content-/semantic-aware and distortion-aware aspects is actually quite common in IQA/VQA on 2D visual data. It’d be better to review some related QA metrics for 2D data, and analyze the key differences in the implementation of this idea between point-cloud and 2D data quality assessment. 2. Lack of comparison on computational complexity. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the Weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please see the Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer #3 (hUdG) for the constructive comments. Our responses are as follows: **`R3-Weakness 1`**: Thank you for your insightful comments. We first review the related IQA/VQA papers and then analyze the key differences with our DisPA. The paper review will be added to our main paper. As for IQA, **CONTRIQUE** [1] learns distortion-related information on images with synthetic and realistic distortions based on contrastive learning. **Re-IQA** [2] trains two separate encoders to learn high-level content and low-level image quality features through an improved contrastive paradigm. **QPT** [3] also learns quality-aware representations through contrastive learning, where patches from the same image are treated as positive samples, while negative samples are categorized into content-wise and distortion-wise samples to contribute distinctly to the contrastive loss. **QPTv2** [4] is based on masked image modeling (MIM), which learns both quality-aware and aesthetics-aware representations by performing MIM that considers degradation patterns. As for VQA, **CSPT** [5] learns useful feature representations by using distorted video samples not only to formulate content-aware distorted instance contrasting but also to constitute an extra self-supervision signal for the distortion prediction task. **DisCoVQA** [6] models both temporal distortions and content-related temporal quality attention via transformer-based architectures. **Ada-DQA** [7] considers video distribution diversity and employs diverse pretrained models to benefit quality representation. **DOVER** [8] divides and conquers aesthetic-related and technical-related (distortion-related) perspectives in videos, introducing inductive biases for each perspective, including specific inputs, regularization strategies, and pretraining. Our DisPA's key difference from these papers lies in the utilization of unique characteristics of point clouds.
For example, to fully investigate the quality information of point clouds, we **randomly rotate point clouds before projection**. In addition, the **mini-patch map integrates the patches from multi-view images** to take advantage of the multi-view characteristic of point clouds. Furthermore, it is worthwhile to note that our paper is the first to use mutual information (MI) to achieve representation disentanglement, which has not been explored in IQA/VQA. **`R3-Weakness 2`**: Thank you for your comments. As requested, we compare the computational complexity of NR-PCQA models as follows:

| Method | Parameters (M) | Inference Time (s) |
| ------------ | -------------- | ------------------ |
| IT-PCQA | 0.61 | 2.87 |
| ResSCNN | 1.23 | 1.92 |
| PQA-Net | 0.22 | 4.86 |
| MM-PCQA | 52.96 | 2.64 |
| DisPA (ours) | 140.77 | 1.89 |

From the above table, we can see that compared to IT-PCQA, ResSCNN, and PQA-Net, whose architectures are specifically designed, our DisPA has a large parameter count because Vision Transformer (ViT) and Swin Transformer are used, just as MM-PCQA uses ResNet-50. Instead of pretraining self-designed lightweight networks, we use popular encoders so that the pretrained encoders are easier to transfer to other tasks. However, although our DisPA has a larger model size, its inference time is the fastest, even including the rendering process, which demonstrates the efficiency of our DisPA. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. The authors have effectively addressed my concerns. I will raise my score to 6.
Summary: This paper proposes a novel disentangled representation learning framework called DisPA to decouple the representation learning process of point cloud content and distortion. To sufficiently disentangle these two representations, DisPA uses two branches to learn them and adopts different training philosophies for each. For the content-aware branch, DisPA pretrains one encoder using a proposed masked auto-encoding strategy, which partially masks the images projected from distorted point clouds and reconstructs the corresponding patches of the images projected from pristine point clouds; for the distortion-aware branch, DisPA integrates the mini patches of the rendered multi-view images into a mini-patch map, which can focus on local distortions and ignore the global point cloud content. Furthermore, to disentangle the learned representations, DisPA uses a trainable mutual information estimator to estimate the mutual information (actually a tight upper bound of it) between these two branches and minimizes it alternately along with the training of the main network (i.e., the two encoders and regression layers). Finally, the experimental results demonstrate the superior performance of DisPA in terms of both prediction accuracy and generalizability. Strengths: + This is the first paper exploring disentangled representation learning for PCQA. The disentanglement is reasonable and even necessary because point cloud content and distortion are perceived differently by humans. + The proposed methodology is well-motivated. The masked auto-encoding strategy can intuitively make the encoder capture the point cloud content information. + The paper is well written. The motivation for why disentangled representation learning is essential for PCQA has been introduced clearly from observations of the human vision system.
+ The mathematical derivations are detailed and easy to follow, including the approximation of MI in Section 3 and further proof in Appendix A. Weaknesses: 1. From my viewpoint, DisPA can be used for IQA as-is, or may even be more suitable for it, since this work does not process native 3D point cloud data but just projects point clouds into images and uses 2D networks. Have the authors tried to introduce unique attribute information that is closely related to point clouds? 2. The whole DisPA is based on projections of point clouds, so the number of viewpoints is very important to the quality score prediction. However, the authors did not discuss the impact of the number of viewpoints. 3. How much time does it take to pretrain the content-aware encoder? The authors did not discuss the pretraining details in the implementation details part. 4. Why does LS-PCQA not follow the K-fold data splitting? The authors should explain this, given the resulting loss of a unified experimental setting. 5. What is the actual architecture of the MI estimator? And what is the relation between the MI estimator and the lightweight neural network Q_phi? The idea of alternating training of the MI estimator and the main network is easy to understand, but the components of the MI estimator need more clarification. Is it just simple MLPs? 6. The mini-patch map generation has been used in many papers [1,2,3]. However, considering this is a minor contribution and the mini-patch map is actually effective for learning disentangled representations, this limited contribution is acceptable. 7. A minor weakness: There are some small flaws (gray bounding box) with the presentation of Figure 5 when zooming in. Please ensure all figures can be presented clearly without error. [1] Wu, Haoning, et al. "Exploring video quality assessment on user generated contents from aesthetic and technical perspectives." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. [2] Wu, Haoning, et al.
"Fast-vqa: Efficient end-to-end video quality assessment with fragment sampling." European conference on computer vision. Cham: Springer Nature Switzerland, 2022. [3] Zhang, Zicheng, et al. "Gms-3dqa: Projection-based grid mini-patch sampling for 3d model quality assessment." ACM Transactions on Multimedia Computing, Communications and Applications 20.6 (2024): 1-19. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why is the conditional distribution p(y|x) unavailable in your case? This needs more explanation, because the mathematical analysis is somewhat abstract, so more details are needed to make it understandable. 2. What are the advantages of the differential ranking loss function? 3. Can this work be trained end-to-end for IQA or video quality assessment (VQA) without modification of specific modules (just replacing the input/output)? 4. Can the masked auto-encoding strategy use natural images for pretraining? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors have adequately addressed the limitations and discussed the potential negative societal impact in Appendix D, Limitations and Future Work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer #2 (oeD7) for the insightful comments. Our responses are as follows: **`R2-Weakness 1`**: Thanks for your constructive comment. As you said, DisPA can be used for IQA without changing the architecture and training pipeline. We use projected images instead of native 3D point clouds for the following reasons: **(1) Large-scale pretraining.** In the scenario of PCQA, we focus on dense point clouds with huge data volume. Rendering point clouds into multi-view images can effectively reduce the data volume to facilitate large-scale content-aware pretraining (i.e., the masked autoencoding strategy) with a large batch size. **(2) Pixel-to-pixel correspondence.** After projection, a pixel-to-pixel correspondence can be established between the projected images of distorted and reference point clouds, which facilitates the computation of the reconstruction loss for the content-aware pretraining. **(3) Generation of the mini-patch map.** Projecting point clouds into images enables the generation of the mini-patch map for the distortion-aware branch to enhance the presentation of low-level distortion patterns. Furthermore, we have introduced some unique characteristics of point clouds to achieve better performance. For example, we **randomly rotate point clouds before projection** to fully investigate the quality information of point clouds. In addition, the **mini-patch map integrates the patches from multi-view images** to take advantage of the multi-view characteristic of point clouds. **`R2-Weakness 2`**: Thanks for your insightful comment; we have tested the performance of DisPA trained on SJTU-PCQA and WPC with different numbers of viewpoints (i.e., 2, 4, 6, and 12 views). The configuration of mini-patch map generation is modified accordingly. The viewpoints are all evenly distributed in the 3D space.
| Performance | SJTU-PCQA | | | | WPC | | | |
| :---------- | :-------- | :------ | :------ | :------- | ------- | ------- | ------- | -------- |
| | 2 views | 4 views | 6 views | 12 views | 2 views | 4 views | 6 views | 12 views |
| SROCC | 0.657 | 0.886 | 0.908 | 0.910 | 0.554 | 0.705 | 0.788 | 0.786 |
| PLCC | 0.636 | 0.893 | 0.919 | 0.919 | 0.577 | 0.711 | 0.790 | 0.793 |

From the above table we can see that as the number of viewpoints increases, the model performance first improves and then stabilizes. More specifically, the performance with 12 viewpoints is slightly better than with 6 viewpoints, while the performance with 2 and 4 viewpoints is relatively unsatisfactory. Considering efficiency, we select 6 viewpoints in our paper. We will add these results to the revised paper. **`R2-Weakness 3`**: Thank you for your question. It takes about 20 hours for the content-aware pretraining on a single NVIDIA 3090 GPU because of the large scale of LS-PCQA and the random rotations before projection during pretraining. We will add these details to our implementation part. **`R2-Weakness 4`**: We select K-fold splitting because of the small scale of SJTU-PCQA and WPC. However, LS-PCQA has **a large scale**, so we select the train-val-test splitting. To further validate the stable performance on LS-PCQA, we test the performance of several NR-PCQA models on LS-PCQA following a unified 5-fold cross-validation:

| Performance | PQA-Net | IT-PCQA | GPA-Net | ResSCNN | MM-PCQA | DisPA (ours) |
| ----------- | ------- | ------- | ------- | ------- | ------- | ------------ |
| SROCC | 0.583 | 0.337 | 0.587 | 0.593 | 0.587 | 0.623 |
| PLCC | 0.590 | 0.348 | 0.606 | 0.625 | 0.603 | 0.635 |
| RMSE | 0.199 | 0.226 | 0.192 | 0.170 | 0.191 | 0.161 |

From the above table we can find that the performance of NR-PCQA models is relatively stable on LS-PCQA (compared with Table 1 of our main paper).
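The viewpoint study earlier in this rebuttal says only that viewpoints are "evenly distributed in the 3D space" without naming a scheme. One standard way to place approximately even viewpoints on a sphere is the Fibonacci lattice; the sketch below is an assumption for illustration, not the authors' actual placement:

```python
import numpy as np

def fibonacci_viewpoints(n):
    """Place n viewpoints roughly evenly on the unit sphere via a
    Fibonacci lattice. One common scheme; the rebuttal does not state
    which even-distribution method was actually used."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i          # golden-angle azimuth step
    z = 1.0 - 2.0 * (i + 0.5) / n                   # evenly spaced heights
    r = np.sqrt(1.0 - z ** 2)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

views = fibonacci_viewpoints(6)
print(np.allclose(np.linalg.norm(views, axis=1), 1.0))  # True: unit view directions
```

Each row is a unit direction from which the point cloud could be rendered; the camera looks toward the origin along that direction.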
**`R2-Weakness 5`**: As you said, the MI estimator is based on MLPs, as is $Q_\phi$. $Q_\phi$ infers the variational distribution, and the MI estimator computes the MI following Equation (3) in the main paper. Therefore, the **lightweight MLP $Q_\phi$ is the key component of the MI estimator**, which integrates the outputs of $Q_\phi$ to compute the MI. **`R2-Weakness 6`**: Thanks for your insightful comment. Although grid mini-patch sampling has been employed in the papers you listed, it is a sub-contribution of ours and quite useful in our scenario for learning distortion-aware representations. Furthermore, our mini-patch sampling is different because we utilize the multi-view characteristic of point clouds and integrate multi-view patches into the mini-patch map. **`R2-Weakness 7`**: Thanks for your kind comments. This is due to issues with PDF viewers; we will fix this problem with the image presentation. **`R2-Question 1`**: Thank you for your insightful question. In our case, the conditional distribution $p(\mathbf y |\mathbf x)$ denotes the distribution of the distortion-aware representation given the content-aware representation. This is unavailable because the distribution of distortion patterns is unknown, since DisPA has no prior knowledge of distortion types and intensities during inference. Furthermore, there are cases of unseen distortion types in cross-dataset evaluations. **`R2-Question 2`**: The differential ranking loss can better help the model distinguish quality differences when point clouds have close quality labels. **`R2-Question 3`**: DisPA can be trained end-to-end for IQA and VQA. However, the mini-patch map generation module should be modified accordingly. For example, IQA can be regarded as a special case of PCQA with only one viewpoint. **`R2-Question 4`**: The masked autoencoding strategy can definitely be used on natural images.
Actually, the content-aware encoder is initialized with parameters optimized on ImageNet-1K with natural images. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. The authors have effectively addressed my concerns. I will raise my score to 7.
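The variational MI bound discussed in this thread (an MLP $Q_\phi$ infers $q_\phi(y|x)$, and the inferred distributions over a mini-batch are summarized into the bound $\hat{\mathcal I}_v$) closely resembles a CLUB-style estimator. Below is a hedged NumPy sketch assuming a Gaussian $q_\phi(y|x)$ with mean `mu` and log-variance `logvar`; this is an illustration of the general construction, not the paper's exact Equation (3):

```python
import numpy as np

def club_upper_bound(mu, logvar, y):
    """CLUB-style sample estimate of an MI upper bound:
    E_p(x,y)[log q(y|x)] - E_p(x)E_p(y)[log q(y|x)], with q Gaussian.
    mu, logvar: (N, D) per-sample outputs of the variational MLP;
    y: (N, D) the paired representations of the other branch."""
    diff = y[None, :, :] - mu[:, None, :]                   # (N, N, D) all pairs
    log_q = -0.5 * (diff ** 2 / np.exp(logvar)[:, None, :]
                    + logvar[:, None, :]).sum(-1)           # constants cancel below
    positive = np.mean(np.diag(log_q))                      # matched (x_i, y_i) pairs
    negative = np.mean(log_q)                               # all (x_i, y_j) pairs
    return positive - negative

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 4))
mu, logvar = x, np.zeros_like(x)              # pretend the MLP predicts mu(x) = x
y_dep = x + 0.1 * rng.normal(size=x.shape)    # strongly dependent branch
y_ind = rng.normal(size=x.shape)              # independent branch
print(club_upper_bound(mu, logvar, y_dep) > club_upper_bound(mu, logvar, y_ind))  # True
```

Dependent representations yield a clearly positive bound while independent ones sit near zero, which is what makes the bound usable as a disentanglement regularizer.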
Summary: This article's motivation is interesting. It combines content-aware and distortion-aware characteristics to train a 3D quality assessment network. Additionally, it employs a MAE-based method to train a content-aware encoder, uses patches to focus the network on learning distortion, and applies an MI module to integrate both features. Strengths: 1. The authors first analyze the shortcomings of existing networks, specifically that data imbalance leads to overfitting in current methods, resulting in poor processing of other content images with the same degradation. Therefore, the authors aim to use an MAE-based method to learn content-related features. 2. The proposed key MI-based regularization is effective. 3. The presented methods can obtain impressive results on multiple datasets. Weaknesses: 1. Regarding the masked part, there is a gap between the first and second stages because the input in the first stage is a partially masked image, while the input in the second stage is the complete image. 2. I am also a bit confused about why the constraint in the first stage is a clean/reference image. This would give the masked encoder the characteristic of restoration, while the core of quality assessment is to evaluate the quality of the image. If the image features are restored, will it affect the accuracy of the image quality assessment? 3. The motivation for the MI part should provide more details and explanations, which leaves me somewhat puzzled. 4. I am not quite sure what the distortion-aware encoder has learned. Is it truly related to degradation features? This might require some verification. Technical Quality: 3 Clarity: 2 Questions for Authors: This section corresponds to the Weaknesses: 1. How does the network address the gap between the input images in the first and second stages? 2. Why is a clean image used as a constraint? This results in learning the restoration characteristics, which is not very helpful for quality assessment. 3. 
The motivation for MI needs further explanation. 4. Visualize or analyze the distortion-aware features. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: No potential negative societal impact. Please address the questions/suggestions above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer #1 (2ATd) for the insightful comments. Our responses are as follows: **`R1-Weakness 1`**: Thank you for your in-depth comments. The gap between the partially masked image and the complete image **is addressed by fine-tuning** with the main training objective (see Equation 10 in our paper). Our content-aware pretraining is based on masked auto-encoding (MAE), where the **MAE is also conducted on partially masked images and fine-tuning is conducted on complete images.** The large-scale pretraining enables the encoder to learn useful semantic representations, while the gap between masked and complete images can be addressed by adequately training the pretrained encoder with small-scale complete images. In our case, the MAE-based pretraining makes the encoder learn to recognize different point cloud contents (on LS-PCQA, with its large scale of different point clouds), and the gap can be addressed by fine-tuning the encoder on specific PCQA datasets. Furthermore, the pretraining effectively enhances the prediction accuracy (see Table 3 in our paper) by a large margin, which indicates the gap has been sufficiently addressed after fine-tuning. **`R1-Weakness 2`**: Thanks for your comments. As you said, using a clean image as a constraint resembles image restoration. However, since our goal is to learn disentangled representations, **we use a clean image as a constraint to force the content-aware encoder to ignore the distortions and capture point cloud content information from distorted projected images**. In contrast, if we used a distorted image as a constraint, the encoder would try to restore images with distortion and would certainly learn joint features of distortion and content, which conflicts with our goal of representation disentanglement. Furthermore, we use t-SNE visualization to verify that the learned content-aware representations are strongly correlated to point cloud content but not to distortions.
The visualization is in the `attached file Figure A`. The embeddings of different point cloud contents are separated by a clear boundary. **`R1-Weakness 3`**: Sorry for the lack of detailed explanation. **The mutual information (MI) part is designed to explicitly disentangle the representations of the two branches**, since the MI directly describes the correlation between two variables. In terms of the detailed motivation of MI estimation and minimization (Section 3 in our paper), we have several key points:
* Instead of computing the intractable exact value of MI (denoted as $\mathcal I$), we compute its upper bound $\hat{\mathcal I}$ because **minimizing the upper bound also minimizes the actual MI** as long as $\hat{\mathcal I}$ is a tight upper bound (i.e., close to the exact value of MI). Proof of the tight upper bound is in Appendix A.
* Since **we have no prior knowledge of the conditional distribution** $p(y|x)$ (reasons in R2-Question 1), **we can only approximate it with a variational distribution**. Specifically, we use an MLP $\mathcal Q_\phi$ to infer a distribution $q_\phi(y|x)$ that is close to $p(y|x)$, and the inferred distributions of samples in a mini-batch are summarized to predict the variational bound $\hat{\mathcal{I}}_v$ (see Equation 3 in our paper).
* As mentioned above, minimizing the upper bound only works when the upper bound is tight. **To make $\hat{\mathcal I}_v$ close to the tight upper bound $\hat{\mathcal I}$, we minimize the KL divergence between $p(y|x)$ and $q_\phi(y|x)$**. After simplification, the KL divergence can be formulated as $\mathcal L_\text{MI}$ (Equation 5 in our paper).
* Finally, for each epoch during training, we first train the MLP $\mathcal Q_\phi$ (i.e., minimize the KL divergence between $p(y|x)$ and $q_\phi(y|x)$).
Then we use the trained $\mathcal Q_\phi$ to estimate the MI to regularize the encoders, as shown in Equation 10 in our paper, where the estimated $\hat{\mathcal I}_v$ is used as a regularization term to achieve explicit disentanglement. **`R1-Weakness 4`**: To verify that the distortion-aware features are related to degradations, we conduct a t-SNE visualization of the learned representations of the distortion-aware branch. The results are in `attached file Figure A`. We also present the visualization results of PQA-Net and GPA-Net for comparison, because these two methods both use distortion type prediction to learn distortion-aware representations. From `attached file Figure A` we can see that **the learned distortion-aware representation shows clear and separate clustering for different distortion types, indicating a strong correlation with degradations.** In contrast, the content-aware representations present worse clustering for distortion types. In addition, many papers [1,2,3] have verified that the mini-patch map can force the network to learn features that are related to degradations. [1] Fast-vqa: Efficient end-to-end video quality assessment with fragment sampling [2] Exploring video quality assessment on user generated contents from aesthetic and technical perspectives [3] Gms-3dqa: Projection-based grid mini-patch sampling for 3d model quality assessment --- Rebuttal Comment 1.1: Title: Response to the authors. Comment: Thank you for your detailed responses. I apologize for missing the fine-tuning process in the first question. In fact, applying the fine-tuning operation after MAE is standard, and I thought this step was omitted in the paper, which caused my confusion. Other doubts have also been resolved, and I will improve my score to weak accept.
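The alternating procedure described in the rebuttal above (first fit $\mathcal Q_\phi$ to tighten the bound, then use the estimated bound as a regularizer on the main loss) can be organized as a short training loop. In this sketch, `ToyEstimator` and its closed-form ridge fit are simplified stand-ins for the paper's MLP and its gradient updates, and the encoder update itself is abstracted into a scalar `task_loss`; none of these names come from the authors' code:

```python
import numpy as np

class ToyEstimator:
    """Stand-in for the MLP Q_phi: a linear fit of d from c, giving a
    unit-variance Gaussian q_phi(d|c). A sketch, not the paper's MLP."""
    def __init__(self, dim):
        self.W = np.zeros((dim, dim))

    def fit_step(self, c, d):
        # ridge solve plays the role of one max-likelihood gradient step
        self.W = np.linalg.solve(c.T @ c + 1e-3 * np.eye(c.shape[1]), c.T @ d)

    def upper_bound(self, c, d):
        mu = c @ self.W
        log_q = -0.5 * ((d[None, :, :] - mu[:, None, :]) ** 2).sum(-1)
        return np.mean(np.diag(log_q)) - np.mean(log_q)  # CLUB-style bound

def train_epoch(batches, estimator, lmbda=0.1):
    """Alternating scheme: (1) tighten q_phi on the current
    representations, (2) add the estimated MI bound as a penalty."""
    losses = []
    for c, d, task_loss in batches:
        estimator.fit_step(c, d)                                   # step 1
        losses.append(task_loss + lmbda * estimator.upper_bound(c, d))  # step 2
    return losses

rng = np.random.default_rng(1)
c = rng.normal(size=(64, 8))            # content-aware batch (toy)
d = c + 0.5 * rng.normal(size=(64, 8))  # distortion-aware batch, still entangled
est = ToyEstimator(8)
losses = train_epoch([(c, d, 1.0)], est)
```

Because `c` and `d` are correlated here, the MI penalty is clearly positive; in the full method that penalty's gradient would push the two encoders apart.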
Rebuttal 1: Rebuttal: Dear Reviewers and ACs: We sincerely appreciate all the reviews. They give insightful and high-quality comments on our paper. We would like to emphasize that our work proposes a novel and effective disentangled representation learning framework for point cloud quality assessment called DisPA. We appreciate your recognition of our efforts in attempting to disentangle the content-aware and distortion-aware representations via mutual information minimization. We have carefully read your comments and made responses to all the reviewers, where we have supplemented our work with additional experiments and visualization results to incorporate the insightful suggestions of the reviewers. **The related figures are attached in the pdf file.** Thank you all for the valuable suggestions. Thanks, Paper 6653 Authors. Pdf: /pdf/f7b1e06556d83a06c8ff871a58a06831486dbcb1.pdf
NeurIPS_2024_submissions_huggingface
2024
Reinforcement Learning with Adaptive Regularization for Safe Control of Critical Systems
Accept (poster)
Summary: The authors develop RL with Adaptive Control Regularization (RL-ACR) to enable safe RL exploration. This solution uses two agents: a safety regularizer to enforce safety constraints, and an adaptive agent to perform exploration. They demonstrate their method on four critical applications against leading benchmarks. Strengths: 1. The use of two parallel agents (safety regularizer and adaptive) is novel. 2. The work is clearly presented with good explanations, algorithm procedures, and visuals. 3. The work studies real-world application problems. 4. The evaluations test the proposed RL-ACR method against a large number of baselines. 5. The evaluations substantiate the authors' claims of 1) improved performance, and 2) reduced failure rate. 6. The evaluations are accompanied by thorough analysis and discussion of the results by the authors. Weaknesses: 1. Theorem 1 requires the assumption that the RL policy converges to the optimal policy. However, I do not see a result guaranteeing that the RL-ACR policy converges to the optimal. Instead, I see a monotonic performance improvement result in Theorem 2, but this is not an optimality result. 2. The work is intended to address safety, yet the authors do not provide a theoretical guarantee of closed-loop stability. Such guarantees are important for control of dynamical systems (e.g., airplanes, cars). Technical Quality: 3 Clarity: 4 Questions for Authors: 1. If an optimality result is available, does it require that the estimated system model $\tilde{f}$ be exact to the actual physical process? I am thinking about this in light of your assumptions for Theorem 1. Presumably an imperfect model would void the convergence to the optimal policy and thereby the sufficient conditions required for Theorem 1. I do appreciate the empirical attention paid to model uncertainty in Section 4.2. 2. How do the compute/data requirements of RL-ACR compare to the baselines? 
Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors adequately address limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[Weakness 1]**: The reviewer asked about the convergence result of RL-ACR. **Response**: For state $s$, the regulation on RL policy converges to 0 ($\beta(s)$ converges to 0) with the assumption that RL learns a better policy than the regularizer. This is a reasonable assumption since the regularizer was derived from a suboptimal model. When the regularization converges to 0, the problem is equivalent to a standard RL problem, where the assumption of RL convergence is reasonable and widely accepted in research. We mentioned this in the proof in L487-492; we thank the reviewer and will clarify the assumption for Theorem 1. **[W2]**: The reviewer asked about the closed-loop stability of RL-ACR. **Response**: We fully understand the reviewer's point but would like to add a couple of remarks that might help their assessment. We considered not knowing the exact model (a characteristic phenomenon in real-world application) and thus did not focus on finding a theoretical guarantee of closed-loop stability, which would be model-dependent, and the analysis would be application-specific. However, we believe that the proof of guaranteed policy return improvement (although addressing the tabular case) is a valuable theoretical result for RL-based methods such as ours. **[Question 1]**: The reviewer asked if an imperfect model prevents the convergence to the optimality result. **Answer**: The optimality proof does not depend on a perfect model. In fact, we assume that the model used for MPC is suboptimal. If the model is imperfect, stronger policies than MPC exist and can be explored by the RL policy. This causes $\beta(s)$ to converge to 0. If $\beta(s)$ converges to 0, there is no regularization term for the policy optimization, thus reducing the problem to a standard RL problem with proven convergence to the optimum policy. **[Q2]**: The reviewer asked about the compute/data requirements of RL-ACR compared to the baselines.
**Answer**: The average per-step solving time for RL-ACR is 0.0375 on one CPU, which is similar to MPC (widely adopted in applications). Model-free methods such as SAC and CPO are faster than RL-ACR, but they cause constraint violations during training. Considering that the RL-ACR solve time is sufficient for real-time control, the computation requirement is reasonable to guarantee safety during training. In terms of data, RL-ACR and the baselines collect the same amount of environment samples. --- Rebuttal Comment 1.1: Title: My thanks to the authors. Overall, my review of this work remains positive. However, I am troubled by assertions made in the rebuttal, which does not convincingly address the raised weaknesses. Decrementing my score. Comment: My thanks to the authors for their responses to my posed weaknesses and questions. The positive aspects of this paper I identified in the review remain unchanged. However, the authors failed to adequately address my raised weaknesses. In fact, their response, which borders on cavalier at times, raises concerns I did not have previously. In light of these points, I am decrementing my score to a weak accept. Below can be found my responses to your specific points. I have focused on the remarks which troubled me: **[Weakness 1]:** > For state $s$ the regulation on RL policy converges to 0 ($\beta(s)$ converges to 0) with the assumption that RL learns a better policy than the regularizer. * I am confused by the point the authors are trying to make here. $\beta(s)$ converging to zero implies that the focus is choosing the RL module agent you are presenting to it -- it says nothing about the convergence of the RL module policy or its quality. * **"Better"** here is vaguely defined and does not imply **convergence** at all.
* My weakness centers on the failure of the convergence of the **sequence of policies** (i.e., convergence with respect to the algorithm iteration of the training scheme) to the optimal policy, not on the focus network simply selecting between the safety regularizer and the RL module. > [...] with the assumption that RL learns a better policy than the regularizer. * This is not rigorous or substantial by any means. It's expected that your focus network should pick the RL module policy **if it's assumed to be a better policy.** I'm asking for **justification via provable results that the RL module converges and delivers improved performance.** > [...] the assumption of RL convergence is reasonable and widely accepted in research * This comment is very troubling. In other words, you mean to assert that every RL algorithm developed is universally accepted to converge? * So then, **simply calling your algorithm an RL algorithm automatically gives it established convergence performance?** * I don't see any other way to interpret this statement. **[Weakness 2]:** > a theoretical guarantee of closed-loop stability [...] would be model-dependent, and the analysis would be application-specific. * This is simply not true in control settings -- in fact, the vast bulk of standard stability results in the control literature are formulated for general systems rather than application-specific examples. > We considered not knowing the exact model (a characteristic phenomenon in real-world application) and thus did not focus on finding a theoretical guarantee of closed-loop stability, * This seems to suggest that model uncertainty and stability guarantees are mutually exclusive, which is also not the case in the literature at all. * In fact, there is a really important subtlety here: One of the core reasons for opting for RL frameworks is to address uncertainty in the environment. 
The authors seem to suggest that performance guarantees cannot be derived in the presence of model uncertainty -- exactly the core use-case of RL. --- Reply to Comment 1.1.1: Title: Our responses and actions regarding the weaknesses pointed out by the reviewer (as well as the reviewer's concerns) Comment: We thank the reviewer for the feedback. Our response was intended to convey our agreement with the Weakness points identified by the reviewer while raising additional points that may aid the assessment of our work; we sincerely apologize if the response seemed dismissive in any form. Below, we provide clarifications for our responses (taking the reviewer's concerns into account) as well as our actions on the manuscript to address the received feedback. > **[Weakness 1]:** The reviewer correctly identified that Theorem 1 requires the assumption that the RL policy converges to the optimal policy, and that this convergence is not established anywhere in the manuscript. This assumption, however, is justified by an existing proof in [Haarnoja et al., 2018], and we will revise the manuscript appropriately to address this comment. More specifically, our method uses the SAC algorithm in its RL module, which has been proven to "converge to the optimum policy" in [Haarnoja et al., 2018]. Our Theorem 1 takes this existing result (RL module convergence) as an assumption and provides more insight into the behavior of the combined policy. We will revise Theorem 1 in the manuscript, and in particular, we will add the following to the paper, directly referring to the existing proof of RL module convergence that we take as an assumption for further analysis. "[…] Haarnoja et al. [2018] prove (see Theorem 1 in the reference) that the learned RL policy with the Soft Actor-Critic losses formulated in Eqs.
(3) and (6), after repeated updates, converges to a policy $\pi^\star$ such that no policy with a larger expected return than that of $\pi^\star$ can be found, i.e., $Q^{\pi^\star}(s_t, a_t) \geq Q^{\pi}(s_t, a_t)$ for all $\pi$ and $(s_t, a_t)$.” We would like to add that by “a better policy” in our response, we meant a policy with a higher expected return, and we extend our apology for using a vague/inaccurate term only to compress our response. We hope that the reviewer finds the updated response/action appropriate for the weakness pointed out. > **[Weakness 2]:** We agree with both the reviewer’s initial and new comments on [Weakness 2]. Although we completely agree with its significance here, a theoretical closed-loop stability guarantee is very difficult to achieve when RL is involved (since the agent is blind to the model). Therefore, we took the approach of demonstrating that the policy update will only improve or retain the expected return of the combined policy. This indicates that the proposed model is guaranteed (proved in Theorem 2 for the tabular case) to perform at least as well as (again, in terms of the expected return) the initial regularizer module. We will clarify the above further in Theorem 2 and will state the weakness pointed out by the reviewer in the Conclusion section as a limitation of the approach. We made an unintentional error in the rebuttal, which we understand sent an unclear message. We agree with the reviewer that model uncertainty and stability guarantees are not mutually exclusive, and our performance guarantee is simply approached from a policy improvement perspective rather than a closed-loop stability perspective.
Summary: The paper proposes a safe RL algorithm using adaptive regularization from a model predictive controller. The regularization is implemented via a weighted sum of an MPC controller and a model-free RL policy. The weight is adaptive and learnable through a “focus network”. The focus network is updated by optimizing the expected Q function of the weighted sum of actions. Experimental results show that the proposed RL algorithm can learn a safe policy with zero constraint violations and outperforms MPC on performance. Strengths: The idea of combining MPC as a regularizer is interesting. The writing is clear and easy to understand. The experimental results look promising under the strict assumption of known models. Weaknesses: 1. The key assumption that the MPC policy $\pi_r$ obtained via the estimated model is safe is too strict, not realistic, and not mentioned explicitly. The authors claim that the model is sub-optimal in line 31, which means the model is not accurate. Model mismatch usually causes constraint violations, but the experimental results in Table 1 show no constraint violations for MPC policies. 2. The computational cost is too high, which makes the proposed algorithm not generalizable to nonlinear systems, high-dimensional inputs, or unknown models. Solving an MPC problem is very computationally heavy. The authors didn’t mention which optimization package or how many CPUs they used for solving MPC problem (7). For nonlinear systems, MPC is not guaranteed to be solved in polynomial time or to converge to globally optimal solutions. For systems with visual inputs and unknown models, the method is not applicable. 3. The focus network update might lead to an unsafe policy. Technical Quality: 1 Clarity: 4 Questions for Authors: 1. Can you clarify what type of reference model $\tilde f$ you assumed for the regularizer policy $\pi_r$? See weakness 1 for the reason. 2. Why not compare to Cheng et al., as you already mentioned it?
Now you only have one safe RL baseline, CPO (and its variants). CPO is too old. 3. If you already have the safe model, why not just do CBF-QP on your learned policy? What are the potential advantages of the proposed algorithm? 4. Please clarify the MPC solver and computational resources used for solving MPC. 5. For the focus network update, it seems the focus network will finally choose $\beta(s) = 0$, which is the optimal policy. But the optimal policy might not be safe. What guarantees that a safe policy is learned, as shown in Table 1? [1] Richard Cheng, Gábor Orosz, Richard M Murray, and Joel W Burdick. End-to-end safe reinforcement learning through barrier functions for safety-critical continuous control tasks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3387–3395, 2019a. [2] Aaron D. Ames, Samuel Coogan, Magnus Egerstedt, et al. Control barrier functions: Theory and applications. In 2019 18th European Control Conference (ECC), pages 3420–3431. IEEE, 2019. Confidence: 4 Soundness: 1 Presentation: 4 Contribution: 1 Limitations: The authors addressed the limitation adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
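For readers weighing the computational-cost concern raised above, a finite-horizon MPC of the general shape of problem (7) can be sketched in a few lines. The scalar system, horizon, cost weights, and SciPy/SLSQP solver below are illustrative assumptions, not the paper's do-mpc/IPOPT implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical scalar system x' = a*x + b*u; the parameters stand in for an
# estimated (possibly inexact) model -- they are NOT taken from the paper.
A_EST, B_EST = 0.9, 0.5
N = 10          # prediction horizon
X_MAX = 2.0     # safety constraint: |x_k| <= X_MAX over the horizon

def rollout(x0, u_seq):
    """Predict the N-step state trajectory under the estimated model."""
    xs, x = [], x0
    for u in u_seq:
        x = A_EST * x + B_EST * u
        xs.append(x)
    return np.array(xs)

def mpc_action(x0, target=0.0):
    """Solve the finite-horizon problem and apply only the first action
    (receding horizon), as an MPC-based regularizer policy does."""
    def cost(u_seq):
        xs = rollout(x0, u_seq)
        return np.sum((xs - target) ** 2) + 0.01 * np.sum(np.square(u_seq))

    cons = {"type": "ineq",  # every entry must be >= 0 at a feasible point
            "fun": lambda u: np.concatenate([X_MAX - rollout(x0, u),
                                             rollout(x0, u) + X_MAX])}
    res = minimize(cost, np.zeros(N), method="SLSQP", constraints=[cons])
    return res.x[0]

u0 = mpc_action(1.5)  # from x = 1.5, the first action pushes x toward 0
```

Solving such a nonlinear program at every time step is the cost the reviewer points to; warm-starting from the previous solution (as the rebuttal later notes) amortizes much of it.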
Rebuttal 1: Rebuttal: **[Weakness 1]**: The reviewer stated that assuming the existence of a safe MPC policy is too strict, not realistic, and not mentioned in the manuscript. **Response**: We believe that the reviewer may have missed a few points. First, this assumption is explicitly mentioned and discussed in the paper at **L335**. Second, the existence of an estimated model for MPC to generate a safe policy is realistic and reasonable, supported by the fact that MPC is arguably the most widespread method used for constrained control applications. Furthermore, to showcase the amount of uncertainty that our approach can withstand before failing, we test the sensitivity of RL-ACR with respect to the model accuracy. As shown in **Fig. 4**, RL-ACR maintains safety even when model parameters deviate by more than 60%. **[W2]**: The reviewer stated that we have not mentioned the optimization package, number of CPUs, etc. used for our experiments and that the high computational cost of MPC is a weakness of our work. Moreover, they state that the proposed approach is not applicable to systems with unknown models and visual inputs. **Response**: In **L560**, we reported that the average decision+learning time for RL-ACR is 0.037s on a single CPU, which is reasonable for most real-time control applications. It is explicitly mentioned in **L122** that we use do-mpc with the IPOPT solver package. The solver reduces the computational complexity by using the solution from the previous time step as the initial guess, leveraging the fact that MPC solves similar problems with slight variations at each time step. Moreover, while we agree with the reviewer that RL-ACR is not readily applicable to systems with unknown models and visual inputs, we want to stress two points: * Having an approximate model of the system at hand is the starting point of our work.
This is a realistic assumption for many applications in real-world scenarios, where approximated models can be obtained by data-driven methods. * A very large class of problems does not rely on visual inputs. While RL-ACR cannot be directly applied to systems that only rely on visual inputs (e.g., the ATARI suite), it can be applied to numerous other domains (e.g., medical applications, robotics, chemical processes, industrial settings). **[W3]**: The reviewer stated that the focus network update might lead to an unsafe policy. **Response**: The focus network is iteratively updated using gradient ascent, which prevents $\beta(s)$ (the state-dependent focus weight) from a sudden decrease. This ensures regularization even if the learned RL policy is affected by overestimation error. The regularization from the safe policy is reduced to a small degree only after state $s$ is visited multiple times. **[Question 1]**: The reviewer asked for clarification of the reference (estimated) model used. **Answer**: MPC reference models are widely adopted for safety-critical control applications [1]. In our work, the model parameters used to obtain the safe policy regularizer for Glucose and BiGlucose are estimated with real patient data, just as can practically be done in real-world scenarios [2,3]. The environment models with their actual and estimated parameters are detailed in **Appendix B**. **[Q2]**: The reviewer suggested that we should have compared with an additional method (Cheng et al.) and that our baseline (CPO) is too old. **Answer**: CPO is a well-established baseline used by most similar works in the recent literature (for an example, see the very recent paper [4]). Nevertheless, we compared our method to two additional, more up-to-date baselines (Silver et al., 2018) and (Yu et al., 2022) in Figure R.I in the PDF attached with the global rebuttal.
The results show that RL-ACR outperforms the new baselines, providing more evidence of the effectiveness of the proposed algorithm for safe control. **[Q3]**: The reviewer asked for the advantages of our method over CBF-QP. **Answer**: CBF-QP minimizes the control effort without considering performance. The advantage of MPC is that it optimizes the predicted performance while maintaining safety constraints, resulting in a high performance of RL-ACR during the learning phase. Also, note that the theoretical safety guarantee of CBF-QP relies on the existence of a “perfect” model. **[Q4]**: The reviewer asked for clarification on the MPC solver and computational resources used. **Answer**: The original manuscript provides this information. MPC was solved using do-mpc, with its implementation based on CasADi and IPOPT (**L122** and **L197**). The average decision+learning time for RL-ACR is 0.037s on a single Intel Core i7-13850HX CPU (**L560**). **[Q5]**: The reviewer raised a concern that after $\beta(s)$ reaches 0 the proposed method might not be safe. **Answer**: In our problems, the policy that optimizes return is within the safe region by design. The state-dependent $\beta(s)$ will converge to 0 only for states where a better policy than the safe regularizer policy has been learned. If exploration continues even after converging to the optimum policy and the system runs into less-exploited states, $\beta(s)$ will be non-zero and RL-ACR will apply safe regularization. $~$ [1] Sherr, Jennifer L., et al. "ISPAD clinical practice consensus guidelines 2022: diabetes technologies: insulin delivery." Pediatric Diabetes 23.8, 2022. [2] Zahedifar, Rasoul, and Ali Keymasi Khalaji. "Control of blood glucose induced by meals for type-1 diabetics using an adaptive backstepping algorithm." Scientific Reports, 2022. [3] Hovorka, Roman, et al. "Nonlinear model predictive control of glucose concentration in subjects with type 1 diabetes." Physiological Measurement, 2004.
[4] Anderson G, Chaudhuri S, Dillig I. Guiding Safe Exploration with Weakest Preconditions. In The Eleventh International Conference on Learning Representations, 2023. --- Rebuttal 2: Comment: Thanks for the detailed rebuttal. ### Re to response to W1: First, L335 is in the limitations section; what you mention there does not make the assumption more reasonable. For the results in Figure 4, parameter uncertainty is only one of many possible sources of model uncertainty. A safe MPC controller for uncertain dynamics is generally very difficult. ### Re to response to W2: Thanks for the clarification on the computation time. I won’t call it fast (you might need to consider the computational limits of real hardware; not all real-time control applications have a CPU as powerful as an Intel i7), but it’s okay. ### Re to response to W3: What I mean is that you have a linear combination of MPC actions and RL actions. These two actions might both be safe, but there is no guarantee that the intermediate actions are safe, which is the key weakness of this paper. ### Re to responses to Q1, Q2, and Q4: my concerns are addressed, thanks. ### Re to response to Q3: I agree with the performance improvement. However, you said, > note that the theoretical safety guarantee of CBF-QP relies on the existence of a “perfect” model. The proposed method also requires a perfectly safe MPC controller, which is no less strict than CBF-QP. Moreover, even with such a controller, you cannot guarantee intermediate safety theoretically. ### Re to response to Q5: > In the problems, the policy that optimizes return is within the safe region by design. This is key information and should be highlighted in the manuscript. The proposed method then requires a special reward design in which the optimal policy is always safe, which is another limitation. Overall, and after reading reviewer zn6h’s concern about convergence, I will not change my score.
I also encourage the authors to try to implement it on a real system if you do believe this is a reasonable way to do real-world safe exploration. The results will be much more powerful. --- Rebuttal Comment 2.1: Comment: We thank the reviewer for the response. ### Re Re to response to W1: We agree that the existence of a safe MPC is an important consideration for the application, and we will highlight this in Section 3.1, the Safety Regularizer (where we introduce the estimated model). Although parameter uncertainty in our experiments reflects only one of many possible sources of model uncertainty, it is commonly used in the literature to simulate the model uncertainty encountered in applications; one example is [1]. ### Re Re to response to W3: The reviewer is correct, and we acknowledge that the safety of the intermediate action is not formally guaranteed. However, we made an addition to the manuscript following the feedback from you and another reviewer (t4ae), which may be of interest for your assessment. We have analytically quantified that the upper bound on the return deviation of the combined policy from the regularizer (safe) policy is proportional to $(1-\beta)\Delta a$, assuming the environment is Lipschitz continuous. This provides some theoretical insight into the safety of the combined policy. ### Re Re to responses to Q1, Q2, and Q4: We are glad that these comments are addressed and appreciate the reviewer’s acknowledgement. ### Re Re to response to Q3: As the reviewer identified, theoretically, both MPC and CBF-QP rely on the existence of a “safe” model. However, the fact that RL-ACR improves performance through the MPC objective justifies the use of the proposed method over CBF-QP. We hope that this clarifies our original response. ### Re Re to response to Q5: We thank the reviewer for the valuable comment, and we will highlight this point in the paper.
Also, we would like to point out that reward designs in which higher rewards correspond to preferable actions (in terms of safety) are common in safety-critical applications (e.g., [2]). (In regulation tasks, for example, states closer to the target correspond to higher rewards while states further from the target are less safe; thus, increasing rewards corresponds to improving safety.) ### Re “after reading reviewer zn6h’s concern about convergence…”: We would like to mention that our initial response to zn6h included a vague term and raised a concern about convergence, but we clarified this soon after in a follow-up response and made minor revisions to the manuscript to prevent any confusion. $~$ [1] Cheng R, Orosz G, Murray RM, Burdick JW. End-to-end safe reinforcement learning through barrier functions for safety-critical continuous control tasks. In Proceedings of the AAAI Conference on Artificial Intelligence, 2019, Vol. 33, No. 01, pp. 3387–3395. [2] Laroche R, Trichelair P, Des Combes RT. Safe policy improvement with baseline bootstrapping. In International Conference on Machine Learning, 2019, pp. 3652–3661. PMLR.
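For concreteness, the intermediate action at issue in the W3 exchange above is a per-state convex combination of the two module actions. A minimal sketch (function and variable names are hypothetical, not the authors' code):

```python
import numpy as np

def combined_action(beta_s, a_regularizer, a_rl):
    """Per-state convex combination of the safe MPC action and the RL action.
    The result lies on the segment between the two actions, which is why
    safety of both endpoints does not by itself imply safety of the mixture."""
    beta_s = np.clip(beta_s, 0.0, 1.0)
    return beta_s * np.asarray(a_regularizer) + (1.0 - beta_s) * np.asarray(a_rl)

a = combined_action(0.7, a_regularizer=[1.0, 0.0], a_rl=[0.0, 1.0])
print(a)  # [0.7 0.3]
```

At `beta_s = 1` the combined action equals the regularizer action, and at `beta_s = 0` it equals the RL action; everything in between is the mixture the reviewer's intermediate-safety concern targets.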
Summary: In this article, the authors address the problem of safe reinforcement learning in a single-life setting. The agent must learn an optimal policy in one single episode without being unsafe throughout the learning. The proposed method relies on mixing a prior safe policy (the safety regularizer) with a reinforcement learning policy. The mixing is done through a linear combination of the actions proposed by the safety regularizer and the reinforcement learning agent. The weight put on either action is determined by a state-dependent learned neural network. The safety regularizer is given by a model predictive control algorithm, which assumes an estimated model of the environment is available. The RL agent follows the SAC algorithm. The authors show that the beta parameter is equivalent to a regularization weight that forces the learned policy to be close to the regularizer policy. They evaluate their algorithm empirically on several simulated environments that have mathematical models. The proposed method is able to outperform the baselines in terms of both safety and return. I acknowledge reading the authors' rebuttal. Strengths: The method is fairly easy to implement and is practical (under the condition that a safety regularizer exists). The overall idea is sound. The idea of mixing policies has been explored before, but they propose a principled way to learn the weights to give to both the safe policy and the learning policy, which I think is useful. The idea of using MPC as a safe policy is original and interesting. The authors provide lemmas and theorems that help with the interpretation of the method. They show that the mixing parameter they introduce can be interpreted as a regularization weight. They also show that the policy should converge to the optimal solution as beta goes to 1. The method clearly outperforms the baselines in terms of safety level and provides better or equivalent performance in terms of return.
The article is overall well written and easy to follow. Weaknesses: - It would have been useful to quantify the deviation of the current policy from the safety regularizer as a function of $\beta$. Since the method does not provide any guarantee on the safety level, it would be useful to quantify the amount of risk being taken by not strictly following the safe policy. - In the evaluation of the method, it would have been useful to add another baseline that takes advantage of the existence of a safe policy, e.g., SPIBB (https://arxiv.org/abs/1712.06924) or residual policy learning (https://arxiv.org/abs/1812.06298). The latter would be a great baseline since it has a similar policy-mixing concept but does not use any weight beta. I acknowledge that the authors have pre-trained SAC and CPO on the estimated model, which is a good baseline as well. - The framework is targeted towards “single life” settings, yet the training is episodic (figure 2). This makes it hard to conclude that the experiments are really evaluating the adaptability of the algorithm. - Some of the results lack discussion. In particular, the authors claim that RL-ACR has less variance in the returns; this is not very obvious from figure 2 in my opinion. The figure clearly shows that SAC is less stable, but why is not really discussed. Since RL-ACR uses SAC at its core, I would expect SAC without the regularizer to perform at least as well in terms of returns (with the caveat of more failures). - The results shown in figure 3 are hard to evaluate as they lack some information about how the experiment is carried out. Is it the performance in a single episode? Is it the same episode used for training? - In figure 5, the authors claim “$\beta_{\psi}(s)$ can apply different policy combination weights depending on how well the current state is exploited”. I agree that it is state dependent, but there is no explicit term that takes into account the “exploitation” level of a state.
I believe the experiment does not show that either. $\beta$ is learned by maximizing the $Q$ function estimate. If a state has not been visited often, its Q values might still be overestimated and lead to a high $\beta$. Presentation: - The title and name of the algorithm are a bit misleading; I thought that the method would use techniques from the field of adaptive control (https://en.wikipedia.org/wiki/Adaptive_control), but there is no actual analogy in the paper. Technical Quality: 3 Clarity: 4 Questions for Authors: - Isn’t equation (10) equivalent to an actor loss in actor-critic methods? It could be useful to discuss a parallel here. - When updating $\pi_\theta$, what would be the effect of using gradient ascent on equation (10) instead? - Why is the performance of SAC worse than RL-ACR in terms of normalized return? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The method requires the existence of a safety regularizer. In my opinion, this is a fairly reasonable limitation, and the method can still be used in many problems. The method does not provide any safety guarantee. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[Weakness 1]**: The reviewer pointed out that it would be useful to quantify the deviation of the combined policy from the regularizer policy. **Response**: We thank the reviewer for the interesting suggestion. We analytically quantified that, for state $s$, the upper bound on the return deviation of the combined policy from the regularizer policy is proportional to $(1-\beta)\Delta a$, assuming the environment is Lipschitz continuous. The theorem and the proof sketch are given at the end of the rebuttal, and we will add the full proof to the paper. **[W2]**: The reviewer suggested adding another safe RL baseline. **Response**: We thank the reviewer for pointing out RPL as a baseline. RPL combines a safe policy with a learned residual policy and prioritizes the safe policy by initializing the residual policy to 0. However, the magnitude of the residual policy action can increase drastically. We have added RPL as an additional baseline, and our results show that RL-ACR is safer and more stable than RPL (see Figure R.I in the global rebuttal PDF). **[W3]**: The reviewer was concerned that the episodic training undermines the evaluation of the algorithms for the “single life” setting. **Response**: “Single life” describes the setting where no failure is tolerated during learning. Episodic training can be roughly regarded as introducing a disturbance every M steps. Taking the Glucose problem, every episode is analogous to prescribing insulin dosages to a patient with imbalanced glucose levels, and the single-life setting requires the algorithm not to fail in any (training or deployment) episode. Considering that RL-ACR achieves the tasks almost instantly, introducing further disturbances through episodic training adds difficulty to better evaluate its performance (as well as that of the baselines). **[W4]**: The reviewer noted that the results in Fig. 2 require more discussion as to why SAC shows larger return variance compared to RL-ACR.
**Response**: We now add this discussion, which is covered in detail in the answer to **Q3**. **[W5]**: The reviewer pointed out that the experimental setting in Fig. 3 requires further explanation. **Response**: Fig. 3 shows the best performance after sufficient training, with methods deployed without exploration. We aim to show in Fig. 3: 1) how methods compare at their best, 2) that RL-ACR improves the initial MPC policy, and 3) that RL-ACR converges to a similar or stronger performance than standard SAC, which disregards safety during training. We appreciate this comment and will clarify the above in the manuscript. **[W6]**: The reviewer questioned the effect of the exploitation level on $\beta(s)$. **Response**: $\beta(s)$ is updated iteratively using gradient ascent, so the change in regularization level depends on the number of visits to state $s$ (besides the gradient). For any state $s$, the number of visits, which is the number of gradient-ascent-based updates, affects the degree to which the regularization level can be changed and thus reflects the exploitation level of state $s$. **[W7]**: The reviewer noted that the name of the algorithm might be confusing. **Response**: In the term “adaptive control regularization”, “adaptive” describes “control regularization”. However, we see the reviewer’s point that readers may think of “adaptive control”. We will change the term to “adaptive regularization” to ensure clarity. **[Question 1]**: Discuss the similarity of Eq. (10) to the actor loss. **Answer**: We thank the reviewer for the comment and will add the following discussion: "Eq. (10) is similar to the actor loss in actor-critic methods. However, instead of updating the policy network, Eq. (10) updates $\beta(s)$ to find the optimal combination between the safe policy regularizer and the learned policy." **[Q2]**: What is the effect of using gradient ascent on Eq. (10) when updating $\pi_\theta$? **Answer**: Compared to Eq.
(10), the RL-ACR loss has an additional entropy regularization term, which encourages exploring diverse actions and avoids sticking to a sub-optimum. Empirically, we tested the effect of using Eq. (10) to update $\pi_\theta$ (Figure R.III in the global rebuttal PDF), which showed reduced training stability. **[Q3]**: Explain the worse performance of SAC in Fig. 2. **Answer**: The difference between RL-ACR's and SAC’s return variance comes from the regularizer in RL-ACR and its ability to prevent failure. Failure corresponds to a penalty affecting the return. SAC fails in some episodes (see the bottom panel of Fig. 2) and gets penalized, thus showing unstable returns compared to RL-ACR. As Fig. 3 shows, RL-ACR and SAC eventually converge to a similar return after sufficient training (as the reviewer identified). We will clarify the relationship between failure and observed return in Fig. 2. $~$ **Theorem**: (Deviation from the Regularizer) Assume $R(s, a)$ and $P(s'|s, a)$ are $L_R$- and $L_P$-Lipschitz continuous. For all $s \in \mathcal{S}$, the following holds for the regularizer policy $\pi_r(s)$ and the combined policy $\pi_c(s) = \beta(s) \pi_r(s) + (1-\beta(s)) \pi_\theta(s)$: $|V^{\pi_r}(s) - V^{\pi_c}(s)| \leq \frac{(1-\gamma)|\mathcal{S}|L_R + \gamma |\mathcal{S}| L_P R_\text{max}}{(1-\gamma)^2} (1-\beta(s)) \Delta a$ **Proof Sketch**: Lipschitz continuity means: \begin{align} &|R(s, \pi_r(s)) - R(s,\pi_c(s))| \leq L_R (1-\beta(s)) \Delta a \\\\ &\|P(\cdot|s, \pi_r(s)) - P(\cdot|s, \pi_c(s))\|_1 \leq L_P (1-\beta(s)) \Delta a \end{align} According to the vectorized Bellman equation: $v_r - v_c = (I - \gamma P_r)^{-1}(r_r - r_c + \gamma (P_r - P_c) v_c)$ Let $d_{r, s}^T$ be the $s$-th row of $(I-\gamma P_r)^{-1}$, with entries $\leq 1/(1-\gamma)$. For all $s$: $|V_r(s) - V_c(s)| \leq |d_{r, s}^T (r_r - r_c)| + \gamma |d_{r, s}^T (P_r - P_c)v_c|$. The proof follows by applying Hölder's inequality to the two parts.
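The vectorized Bellman identity used in the proof sketch can be sanity-checked numerically on a small random Markov chain. The sketch below is an illustrative check, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
S, gamma = 4, 0.9  # small state space, discount factor

def random_chain():
    """Random row-stochastic transition matrix and reward vector, standing in
    for the Markov chain induced by one fixed policy."""
    P = rng.random((S, S))
    P /= P.sum(axis=1, keepdims=True)
    r = rng.random(S)
    return P, r

P_r, r_r = random_chain()  # induced by the regularizer policy pi_r
P_c, r_c = random_chain()  # induced by the combined policy pi_c

def value(P, r):
    # v = (I - gamma P)^{-1} r  (vectorized Bellman equation)
    return np.linalg.solve(np.eye(S) - gamma * P, r)

v_r, v_c = value(P_r, r_r), value(P_c, r_c)

# Identity from the proof sketch:
# v_r - v_c = (I - gamma P_r)^{-1} (r_r - r_c + gamma (P_r - P_c) v_c)
rhs = np.linalg.solve(np.eye(S) - gamma * P_r,
                      r_r - r_c + gamma * (P_r - P_c) @ v_c)
print(np.allclose(v_r - v_c, rhs))  # True
```

The identity follows from $(I-\gamma P_r)(v_r - v_c) = r_r - r_c + \gamma(P_r - P_c)v_c$, obtained by subtracting the two Bellman equations.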
--- Rebuttal Comment 1.1: Title: Thank you for the replies and for adding the RPL evaluation Comment: The replies to the questions were very clear. The experiment with RPL was useful to convince me of the advantage of the method. Regarding weakness 6, I understand that the more frequent states will have more effect on the value of beta, as you will do more gradient updates whose batches contain these states, but I don't think we can really get insights on what beta will be for those states. Did I miss something? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the constructive feedback and for acknowledging that we addressed the comments. Regarding the new question, the reviewer is right, and we will clarify this in the discussion of Fig. 5: “exploitation is not explicitly taken into account by $\beta_\psi(s)$, but more frequent states will have more effect on the value of beta, although the exact value of beta cannot be predicted”.
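The visit-count intuition discussed in this exchange can be illustrated with a tabular toy example. The critic, actions, and learning rate below are all invented for illustration (not the paper's setup): each visit to a state applies one gradient-ascent step on the critic value of the combined action, so beta for frequently visited states moves further, even though its exact value cannot be predicted in general.

```python
import numpy as np

# Tabular stand-in for the focus network: one beta per discrete state, updated
# by gradient ascent on a toy critic Q(a) = -(a - A_OPT)^2.
A_OPT = 1.0                 # action maximizing the toy critic (hypothetical)
a_reg, a_rl = 0.2, 1.0      # safe regularizer action vs. (better) learned action
lr = 0.05

def update(beta, n_visits):
    """Each visit applies one ascent step, so frequently visited states see
    their regularization weight move further."""
    for _ in range(n_visits):
        a_c = beta * a_reg + (1 - beta) * a_rl            # combined action
        dq_dbeta = -2.0 * (a_c - A_OPT) * (a_reg - a_rl)  # chain rule on Q
        beta = float(np.clip(beta + lr * dq_dbeta, 0.0, 1.0))
    return beta

print(update(1.0, 5), update(1.0, 200))  # beta decays gradually, then to ~0
```

In this toy, the RL action is better under the critic, so beta shrinks with repeated visits; a rarely visited state would keep a beta close to its initial value of 1, retaining strong regularization.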
Summary: The paper proposes the RL-ACR algorithm to solve the safe RL exploration problem, where the RL policy must be safe during and after training. It achieves this by learning a "focus network" which mixes the actions from an MPC-based safety regularizer and a conventional RL module. The algorithm is claimed to address "single-life" safe RL applications where safety is critical even during training. Strengths: 1. Quality * The approach itself is sound, and the paper provides a proof that the combined policy (which mixes actions from two modules) is unbiased if the RL module converges to the optimal policy. 2. Clarity * The paper is written in an easy-to-understand manner and clearly illustrates how different components work together. Figures, equations, and the algorithm are all well-designed and well-written to support the entire paper. Weaknesses: 1. Originality * Most of the components proposed in RL-ACR have been explored previously: (i) combining a two-prong policy [1], (ii) MPC with a learned model [2, 3], (iii) safe RL with model rollouts [4, 5, 6]. * The proposed focus network learns to output a scalar which is a mixing coefficient between the two policies. The focus network is similar to the learnable Safe Editor component in [1]. However, Safe Editor seems more sophisticated, as it learns to edit the action itself. I wonder if there could be a comparison between the two for evaluation. 2. Significance * RL-ACR relies heavily on the estimated model $\tilde{f}$ for safety. Consequently, the reliability (and accuracy) of the model is an important factor in guaranteeing safety. As pointed out in [4], even for a highly accurate model, it becomes increasingly challenging to predict the future trajectory with reasonably good accuracy when the model unrolling horizon becomes larger, because the error cascades over many timesteps. In Eq. 7 of the paper, RL-ACR is required to perform N-step model unrolling. I have some doubts on how to obtain a reliable model to support RL-ACR.
* Related to the previous point, in the conducted experiments, the model $\tilde{f}$ used in the paper is synthetically generated. It is not clear whether a model which can predict lengthy future trajectories reliably can be trained from the ground up. The tasks experimented on (Glucose, BiGlucose, CSTR, Cart Pole) also do not seem to have long horizons or high dimensionality. Thus, I have reservations about whether RL-ACR can be used to reliably solve safe exploration problems in more sophisticated tasks. 3. Baselines used in Experiments * The safe RL baselines used in the experiments, SAC and CPO, are neither model-based nor intended for safe exploration. SAC only learns to maximize reward and disregards safety. It is also unclear how SAC makes use of the model in SAC-pt. As for CPO, although it is a safe RL algorithm, it is not intended to solve the safe exploration problem. It is also not clear to me how CPO-pt exploits the model $\tilde{f}$ for this purpose. * Perhaps one possible baseline could be the safe editor in [1], which also claims to solve the safe exploration problem? References [1] Yu, H., Xu, W. and Zhang, H., 2022. Towards safe reinforcement learning with a safety editor policy. Advances in Neural Information Processing Systems, 35, pp. 2608–2621. [2] Nagabandi, A., Kahn, G., Fearing, R.S. and Levine, S., 2018. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 7559–7566. IEEE. [3] Chua, K., Calandra, R., McAllister, R. and Levine, S., 2018. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems. [4] Janner, M., Fu, J., Zhang, M. and Levine, S., 2019. When to trust your model: Model-based policy optimization. Advances in Neural Information Processing Systems, 32. [5] Clavera, I., Fu, Y. and Abbeel, P. Model-Augmented Actor-Critic: Backpropagating through Paths.
In International Conference on Learning Representations. [6] Thomas, G., Luo, Y. and Ma, T., 2021. Safe reinforcement learning by imagining the near future. Advances in Neural Information Processing Systems, 34, pp. 13859–13869. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. (Related to Weakness 3) Do you think the safe editor policy in [1] could be a good baseline for comparison? 2. (Related to Weakness 3) Do you have results which evaluate your approach in a longer-horizon (and higher state-action dimension) setting where it is more challenging to predict the future trajectory reliably? 3. In your experiments, how do SAC-pt and CPO-pt make use of the model $\tilde{f}$? To my understanding, they are both model-free algorithms and do not exploit any dynamics model. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: 1. The paper discusses one limitation, which is having a reasonably accurate model for the control regularizer. It is pointed out in [4] that even for a model which can predict the next step reliably, it is very difficult to predict the next N steps accurately, since errors cascade over the prediction timesteps. The paper could possibly discuss this in detail. 2. Another point about learning the model is that it should be able to predict transitions to unsafe states in order to support the safety regularizer. This inevitably means that unsafe transitions need to be collected to train the model. How this model collection fits into the entire safe exploration framework is an interesting area to discuss. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[Weakness 1]**: The reviewer stated that some components of the proposed method have been previously explored. **Response**: While our work contains components of the suggested references, the resulting algorithm takes a completely new approach to safety-critical control; in general, we believe that the combination of previously explored components does not take away from the originality of our work. Nevertheless, there are several key differences between our work and the mentioned references: i) Ref. [1] combines two learned policies, but RL-ACR regulates one learned policy with an established safe policy to ensure safety from the beginning of training; ii) Refs. [2, 3] use MPC mainly to improve "sample efficiency" rather than to achieve "safe control," which is our aim; iii) In [4, 5, 6], the rollouts based on learned models only avoid unsafe actions after convergence, unlike RL-ACR, which is designed to avoid safety violations from the beginning of training. **[W2]**: The reviewer raised that generating an estimated model for the safe MPC module is challenging. The reviewer was also concerned about whether the problems used in our experiments were complex enough. **Response**: For an environment with critical safety requirements, an estimated model (e.g., derived from limited observed data points) is sufficient for the safe MPC module of RL-ACR to perform very well (one can think of the estimated model as a fit to the observed data points). This is shown in our Glucose and BiGlucose environments, where the model parameters are derived using real data from ~10 patients [7, 8]; this is practical in most real applications with limited data. We chose the environments to showcase the effectiveness of the method in "safety-critical" applications rather than focusing on environment complexity. 
Nevertheless, the BiGlucose model is generally regarded as a hard regulation problem, with 12 internal states (11 of them unobservable), 2 actions with large delays, and nondifferentiable piecewise dynamics. We have also provided further evidence of the effectiveness of RL-ACR in Acrobot, which is highly nonlinear and underactuated (see Figure R.II in the global rebuttal PDF). **[W3]**: The reviewer raised questions on the rationale for the selection of the baselines SAC and CPO, and how SAC-pt and CPO-pt make use of the estimated model. **Response**: While it is true that SAC and CPO do not explicitly use models, the rationale behind our choice is threefold: i) SAC was chosen to show the optimal performance without considering safety. RL-ACR achieving stronger performance than SAC in Fig. 3 indicates that RL-ACR is not trading performance for safety. ii) CPO is a commonly chosen safe RL baseline that explicitly handles safety constraints during exploration (through a CMDP formulation and recovery updates). iii) SAC-pt and CPO-pt are pretrained on the estimated model $\tilde{f}$ (which is used as a simulator of the actual environment); note that this is elaborated in the manuscript (see Line 217). This setting benefits SAC-pt and CPO-pt by allowing them to exploit the information in the available estimated model. **[Question 1]**: The reviewer asked if SEditor [1] can be used as a baseline. **Answer**: We thank the reviewer for pointing out this more up-to-date baseline, and we have included it in our comparisons (see Figure R.I in the global rebuttal PDF). Results show that SEditor [1] does not guarantee safety during training, because safety violations need to be observed before the cost return can be learned. SEditor also performs worse during training in 3 out of the 4 environments, potentially restricted by constraints calculated from suboptimal cost estimates. **[Q2]**: The reviewer asked for results in environments where predicting the future is more challenging. 
**Answer**: We tested RL-ACR in Acrobot, where predicting the future is more challenging because of underactuation and high nonlinearity. Figure R.II in the PDF attached to the global rebuttal shows that RL-ACR maintains safety, as only the first action in the MPC rollout is adopted. Also, as mentioned in the response to **W2**, the Glucose and BiGlucose models are generally regarded as hard regulation problems. The results demonstrate that RL-ACR is robust even against >60% model parameter mismatch (Fig. 4). **[Q3]**: The reviewer asked how SAC-pt and CPO-pt make use of the estimated model. **Answer**: SAC and CPO are model-free, and thus not safe during training. We additionally generate SAC-pt and CPO-pt, which are pretrained using the estimated model $\tilde{f}$ as the simulator. This makes them more competitive by skipping the initial random trial-and-error. This also equips CPO-pt with safety knowledge through the pretrained cost value function, thus delivering more competitive safety performance upon deployment. [7] Zahedifar, Rasoul, and Ali Keymasi Khalaji. "Control of blood glucose induced by meals for type-1 diabetics using an adaptive backstepping algorithm." Scientific Reports 12.1 (2022): 12228. [8] Hovorka, Roman, et al. "Nonlinear model predictive control of glucose concentration in subjects with type 1 diabetes." Physiological Measurement 25.4 (2004): 905. --- Rebuttal 2: Comment: I thank the authors for their succinct and to-the-point rebuttal. The comparison with SEditor (Yu et al., 2022) is helpful, as it demonstrated that RL-ACR is more suitable for environments with critical safety requirements, where an established safe policy helps to ensure safety. Due to the new results, I'm willing to increase my final rating. UPDATE: Final rating updated.
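The receding-horizon behavior discussed in the rebuttal (plan a rollout, execute only the first action, then re-plan from the observed state) is a standard MPC pattern. The sketch below is a generic random-shooting illustration with invented dynamics, cost, and safety bound; it is not the authors' RL-ACR implementation.

```python
import numpy as np

def plan(model, x0, horizon=10, n_candidates=256):
    """Toy random-shooting MPC: sample action sequences, roll out the
    (possibly inaccurate) model, keep the lowest-cost sequence.
    All quantities here are illustrative stand-ins."""
    rng = np.random.default_rng(0)
    best_cost, best_seq = np.inf, None
    for _ in range(n_candidates):
        seq = rng.uniform(-1.0, 1.0, size=horizon)
        x, cost = x0, 0.0
        for a in seq:
            x = model(x, a)
            cost += x ** 2            # toy quadratic state cost
            if abs(x) > 2.0:          # toy safety constraint
                cost += 1e6           # heavily penalize predicted violations
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq

def model(x, a):                      # stand-in one-step dynamics model
    return 0.95 * x + 0.3 * a

x = 1.5
for t in range(20):
    seq = plan(model, x)
    x = model(x, seq[0])  # receding horizon: execute ONLY the first action,
                          # then re-plan from the newly reached state
assert abs(x) <= 2.0      # the state stays within the toy safety bound
```

Re-planning at every step is what limits the impact of model error: the controller never commits to more than one step of an imperfect rollout.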
Rebuttal 1: Rebuttal: ## Global Rebuttal We thank the reviewers for reviewing the paper and providing valuable feedback. We have provided point-by-point responses to the weaknesses and questions listed by the reviewers. Here, we list a few important notes on some of the comments (some leading to additional data/experiments) that may be of interest to others. 1. In the original paper, we deliberately included methods that are recent but "well-established" as baselines in our experiments. As suggested by the reviewers, we have added two additional, more up-to-date baselines: (Silver et al., 2018) and (Yu et al., 2022), compared to which RL-ACR shows stronger performance in both safety and performance (see Figure R.I in the attached PDF). 2. For experimental analyses, we chose safety-critical environments to showcase the important applications of our algorithm. This includes some challenging environments with nondifferentiable environment dynamics and large delays. We have now provided some additional evidence on the robustness of RL-ACR in a highly nonlinear and underactuated environment, the Acrobot (Figure R.II in the PDF). 3. As suggested by reviewer ta4e, we performed an additional ablation study on the effect of entropy regularization in the actor loss (Figure R.III in the PDF), which shows that entropy regularization encourages diverse policies and avoids sticking to a suboptimal policy. 4. Although some of the technical components presented in our work are explored in the existing literature, our proposed algorithm is unique in its ability to avoid control failure from the first training episode while simultaneously converging to the performance standards of RL methods that disregard safety; this is crucial for many real-world applications and yet very rare in the literature. Pdf: /pdf/607ee95740abb29ce4568128d9fd4ed1271c4cc1.pdf
NeurIPS_2024_submissions_huggingface
2,024
Boosting Sample Efficiency and Generalization in Multi-agent Reinforcement Learning via Equivariance
Accept (poster)
Summary: This paper proposes an exploration-enhanced equivariant graph neural network. The experimental results demonstrate effective utilization of samples. The method is an improvement on function approximation for MARL and is thus compatible with most MARL actor-critic methods. Strengths: The algorithm proposed by the authors has been extensively experimented with in the MPE and SMACv2 environments, and the paper has a clear structure. The proposed structure can also be applied to other MARL algorithms. Weaknesses: No Technical Quality: 3 Clarity: 3 Questions for Authors: No Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your response and praise in strengths! We added some additional plots in the global response. Let us know if you have any questions regarding those additional results.
Summary: The paper applies Equivariant Graph Neural Networks (EGNN) to address the issue of low sample efficiency in multi-agent reinforcement learning (MARL). The authors demonstrate that in certain scenarios, EGNN outperforms Multi-Layer Perceptrons (MLP) and Graph Neural Networks (GNN). Key contributions include showcasing EGNN's ability to handle complex action spaces and proposing methods to enhance the exploration capabilities of agents within this framework. The paper presents a limited number of experimental results, attempting to illustrate the superiority of EGNN over some methods in improving exploration efficiency in MARL tasks. Strengths: 1. The authors propose using Equivariant Graph Neural Networks (EGNN) to address the lack of sample efficiency in multi-agent reinforcement learning (MARL), providing insights to promote the application of EGNN in MARL further. 2. They identify the challenge of handling complex action spaces with EGNN and propose a solution to this problem. 3. Highlighting the lack of exploration capability when directly applying EGNN to MARL introduces a new intuition. Weaknesses: 1. The paper lacks baselines that utilize symmetry [1,2,3]. 2. There is a lack of entropy-based baselines to compare with for improving exploration capabilities [4]. 3. Not all tasks can leverage symmetry, so the paper needs to specify the applicable scope of the proposed method. 4. The innovation is unclear, as extensive work on multi-agent systems is already based on EGNN. Applying EGNN to MARL appears more like trajectory prediction [3]. If the system is controlled solely by displacements in the x and y directions, it essentially remains trajectory prediction, similar to EqMotion [3]. 5. The experimental descriptions are insufficient, lacking clarity on how features are designed, such as what constitutes equivariant and invariant features. 
Additionally, the use of symmetry is closely related to system state transitions, but the dynamics of these transitions are not introduced. 6. The approach to handling partial observability in multi-agent systems is unclear. 7. The reasoning for using zero-mean Gaussian policies is insufficiently explained. Moreover, if the goal is to enhance exploration, it is unclear why entropy-based methods are not used for comparison. 8. Due to the unclear method descriptions and the lack of open-source code, the reproducibility of the paper is poor. 9. Although EGNN can be applied in multi-dimensional spaces, the experiments are all conducted in two dimensions. [1] Muglich D, Schroeder de Witt C, van der Pol E, et al. Equivariant networks for zero-shot coordination[J]. Advances in Neural Information Processing Systems, 2022, 35: 6410-6423. [2] Yu X, Shi R, Feng P, et al. Leveraging Partial Symmetry for Multi-Agent Reinforcement Learning[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(16): 17583-17590. [3] Xu C, Tan R T, Tan Y, et al. Eqmotion: Equivariant multi-agent motion prediction with invariant interaction reasoning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 1410-1420. [4] Liu I J, Jain U, Yeh R A, et al. Cooperative exploration for multi-agent deep reinforcement learning[C]//International conference on machine learning. PMLR, 2021: 6826-6836. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. **Difference from Trajectory Prediction**: How does your application of EGNN in multi-agent reinforcement learning (MARL) fundamentally differ from trajectory prediction in multi-agent systems? 2. **Rationale for Zero-Mean Gaussian Policies**: Why is using zero-mean Gaussian policies considered best practice? 3. **Partial Observability Implementation**: How is partial observability handled in your experiment? Could you provide a detailed explanation of your method? 4. 
**Enhancing Exploration with Entropy-Based Methods**: Is it possible to incorporate entropy-based methods to enhance exploration in your EGNN framework? Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors need to explore the scope of symmetry utilization to clearly articulate the limitations of their method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your response. We have updated our charts and discuss the novelty further below. We hope you will consider increasing the score given the updates and content. 1. See the common rebuttal for an additional baseline. [3] is not a MARL algorithm. We attempted to replicate [2] but failed to see any improvement over standard PPO. [1] is only relevant to discrete action spaces and discrete observation spaces. However, we will add citations to [1, 3] (we already cite [2]) in the related works section. 2. Entropy baselines, while potentially helpful, are not the standard for MARL (see the questions below), and our goal was to compare with standard MARL approaches. Additionally, our contribution was not limited to addressing the exploration bias of EGNN; it was also our contribution to apply EGNN to complex MARL environments, leading to improved sample efficiency and generalization. 3. We believe many tasks will have some form of symmetry, but that symmetry may be inexact or only partial. Our approach cannot currently address those types of problems. However, we are excited to pursue this in future work. 4. This seems to be the first work applying EGNN to MARL, and the first work to successfully apply any equivariant neural network to the common MARL SMACv2 benchmark. See the questions section for a discussion of trajectory prediction. 5. These are common benchmarks in MARL. We did not design any new features; we used the standard outputs of these environments. 6. See lines 265-266. 7. See the discussion about the Gaussian below in the questions. The goal is not necessarily to only enhance exploration. The goal was to apply equivariant neural networks to MARL problems. In the process of using EGNN we noticed that its bias was causing a problem in exploration. Thus we had to modify EGNN to improve its performance in MARL. 8. We believe this approach to be fairly reproducible. 
We knew we might not get permission to release our code. To that end, we specifically used open-source libraries for everything we did. The RL code is the open-source library RLlib. We used the code from the EGNN paper for the neural networks. The environments are also open source, as are the features of the environments. 9. We applied EGNN in 2 dimensions because our problems (standard benchmarks in the MARL literature) were in 2 dimensions. It could certainly be applied to more depending on the problem. Questions 1. MARL and trajectory prediction are related but fairly different. MARL is attempting to control a set of agents to achieve some objective. This entails learning a value function to approximate the value of each state in achieving that overall objective, and a policy to control the agents toward that objective. Trajectory prediction is focused on predicting the next step of the dynamics. Trajectory prediction is relevant to model-based RL algorithms, but in this case we are focused on model-free algorithms. 2. We used the Gaussian output for the policy because that is common practice in the field. Common RL code libraries such as Stable Baselines (https://stable-baselines.readthedocs.io/en/master/) and RLlib all use Gaussian-parameterized policies. See also: "...outputting the mean of a Gaussian distribution, with variable standard deviations, following [Sch+15b; Dua+16]" from the original PPO paper, and "The two most common kinds of stochastic policies in deep RL are categorical policies and diagonal Gaussian policies." (https://spinningup.openai.com/en/latest/spinningup/rl intr). We wanted our approach to be as broadly applicable as possible, so we focused on common practices. 3. See lines 265-266. 4. I think that is an interesting idea, and one worth exploring in future efforts. 
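The diagonal-Gaussian convention cited in answer 2 above (a network outputs the mean; a state-independent learnable log-std sets the exploration scale) can be sketched generically. The toy linear "network", dimensions, and class name below are invented for illustration; this is not the authors' network or any particular library's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class DiagGaussianPolicy:
    """Generic diagonal-Gaussian policy head (illustrative stand-in)."""

    def __init__(self, obs_dim, act_dim):
        # toy linear "network" producing the action mean; small init keeps
        # the mean near zero, i.e., unbiased early exploration
        self.W = rng.normal(0.0, 0.1, size=(act_dim, obs_dim))
        self.log_std = np.zeros(act_dim)   # learnable, state-independent

    def mean(self, obs):
        return self.W @ obs

    def sample(self, obs):
        mu, std = self.mean(obs), np.exp(self.log_std)
        return mu + std * rng.standard_normal(mu.shape)

    def log_prob(self, obs, act):
        mu, std = self.mean(obs), np.exp(self.log_std)
        return np.sum(-0.5 * ((act - mu) / std) ** 2
                      - self.log_std - 0.5 * np.log(2 * np.pi))

pi = DiagGaussianPolicy(obs_dim=4, act_dim=2)
obs = np.ones(4)
act = pi.sample(obs)
assert act.shape == (2,)
assert np.isfinite(pi.log_prob(obs, act))
```

In this parameterization, early exploration is centered wherever the mean network puts it; the paper's observation is that an EGNN mean is not centered at zero at initialization, which is the bias the authors set out to fix.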
--- Rebuttal Comment 1.1: Comment: After carefully reading and reconsidering the manuscript and the authors' responses to my initial comments, I maintain that the paper requires significant improvements in several key areas: 1. Comparison with Reference [20]: Both this manuscript and reference [20] utilize equivariant neural networks for multi-agent systems. However, reference [20] exhibits greater theoretical depth and technical maturity. Specifically, [20] more thoroughly addresses the symmetry issues in multi-agent systems and utilizes SEGNN, which is more advanced than the EGNN employed in this paper. Consequently, the contributions in terms of innovation and technical depth appear insufficient. 2. Limitations of EGNN: The paper highlights deficiencies in EGNN's exploration performance, a valuable observation. However, the modification proposed in the paper results in the loss of translational equivariance properties. The current treatment and analysis do not adequately resolve or mitigate this issue. 3. Lack of In-depth Analysis on Exploration Performance: Although the paper discusses exploration performance issues, it lacks a thorough comparison with existing entropy-based methods. Such comparisons are essential for a comprehensive evaluation of the efficacy of the method presented in the paper. Additionally, the summary of related work is not sufficiently comprehensive, failing to highlight the distinctions and advantages of this research over existing studies. Based on these considerations, I maintain my initial rating. Should the authors address these issues, I will consider raising my score. --- Rebuttal Comment 1.2: Comment: Thank you for your time in reviewing this paper. We believe we addressed these concerns with our rebuttal and responses. 
In particular, we demonstrated that our approach greatly outperformed [20], with up to 2x the final win rate (on Protoss and Terran), more than 5x the sample efficiency (on Protoss and Terran), and 4x faster learning time in hours (on MPE). We discussed why translation equivariance is a hindrance to MARL. Finally, we discussed that the MARL literature largely uses Gaussian policies and that entropy-based MARL exploration is neither state of the art nor common (especially for PPO). Please see our longer official comment on these concerns (posted yesterday) for more details. In your comment you mention you will consider raising your score if we address these concerns; please let us know if there are any other concerns you may have. Thanks! --- Rebuttal 2: Comment: Thanks for responding! 1. Note that our paper was written contemporaneously with [20]. We had much of our results before it was published. However, we did add an empirical comparison with [20] and note that our approach E2GN2 (see the PDF in the common rebuttal) greatly outperforms E3AC [20] on all of the SMAC benchmarks. On Terran and Protoss in particular, we more than double their final win rate (~0.6 vs ~0.3) and learn much faster: E2GN2 reaches a win rate of 0.3 in about 1e6 time steps, while E3AC [20] takes 1e7 time steps to reach that win rate. On MPE, [20] is much slower, taking 4 hours to train. We believe our improvements are significant (due to using EGNN, E2GN2, and Section 4.3) and should be shared with the community. Much of the analysis in [20] is only a small extension of the prior analysis in Van der Pol's work. Van der Pol established the theory for adding equivariance to multi-agent MDPs (albeit in very simple grid-world environments); the work in [20] did not add much to this in terms of theory. Since Van der Pol had established this theory, we chose not to focus on it, as it would not be novel. Instead we focused on more practical applications and improvements. 
Regarding SEGNN, it is not necessarily an improvement over EGNN, as it is very slow. We initially decided to use EGNN specifically because SEGNN and related networks (i.e., those using spherical harmonics, Wigner-D matrices, etc.) tend to be slow. Looking at the original SEGNN paper, it is 10x slower than EGNN. MARL experiments are already too slow, so we did not want to exacerbate this problem. In our global response we did observe that this results in much slower training times for the results from [20]. Note that [20] was not able to get competitive results on SMAC. We were able to greatly improve learning on SMAC due to our approach described in Section 4.3. Additionally, the solution in Section 4.3 is not only to change the movement actions to be continuous; it is to decompose the discrete action outputs such that the node for each enemy outputs one component of that action. This is crucial for retaining the GNN locality structure and allowing us to scale agents without retraining. 2. It is true that E2GN2 loses translation equivariance guarantees. However, it retains translation invariance guarantees (*see below). For MARL policies, translation invariance is useful [see 20], but translation equivariance is harmful. We certainly observed that in practice translation equivariance was harmful (for one, our results demonstrate that E2GN2 outperforms EGNN). As a simple example, suppose an agent is at position (0, -1) and the optimal action for agent 1 is to move up: (0, 1). Now shift this down by 100, so the agent's position is (0, -101). A translation-equivariant network will shift the action by -100, leading to the action (0, -99). This means the agent will always move down, which is not optimal! This is not desirable for many MARL environments. When testing in MPE and SMAC, this may not cause a large problem, as the inputs are often normalized to be near (-1, 1), but it can still potentially affect performance. 
* We can add a full proof to the appendix of E2GN2 being translation invariant (just not equivariant), but briefly: we did not modify the equation for computing $h_i$, and we still use the output of $h_i$ for the invariant components. If we translate the input by $b$, the coordinate update maps $u_i^{l-1} + b$ to $u_i^{l} + \phi_u(m_{ij}) b$. The output $h_i^{l+1}$ is translation invariant as long as $\| u_i - u_j \|$ is, and $\| u_i^l + \phi_u(m_{ij}) b - (u_j^{l} + \phi_u(m_{ij}) b) \| = \| u_i^l - u_j^l \|$, so we retain translation invariance. 3. Regarding the citation concerns, we have mentioned that we will add the citations you listed (as well as others some reviewers mentioned). We will add content from these reviews going more in depth on the differences between our work and [19, 20, 21] (similar to the global rebuttal). We are not certain we understand the rest of this comment. We were not seeking to change the exploration structure of the MARL learning problem. We were seeking to ameliorate a problem we noted in the EGNN structure (that of bias). We presented an in-depth analysis of this bias and compared EGNN vs. E2GN2 across a variety of benchmarks (Figures 3, 5, and 6), demonstrating that the exploration bias of EGNN caused worse performance. There are indeed many methods that attempt to improve an RL algorithm's exploration performance (such as the many papers on curiosity-driven exploration). However, our focus was on modifying EGNN to improve on common practices in MARL environments/benchmarks (Stable Baselines, RLlib, etc.). To that end, we focused on using the common Gaussian parameterization of the policy (see our previous comment, Q2, for those citations). EGNN's bias would likely harm any such exploration method as well. 
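The invariance/equivariance distinction above can be sanity-checked numerically: features built from pairwise distances ignore a global translation, while an EGNN-style coordinate update shifts with the input. The five random nodes and the scalar weight below are arbitrary illustration choices, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(size=(5, 2))        # toy 2-D node coordinates
b = np.array([0.0, -100.0])        # global translation (cf. the example above)

def pairwise_dists(u):
    d = u[:, None, :] - u[None, :, :]
    return np.linalg.norm(d, axis=-1)

def coord_update(u, w=0.1):
    # simplified EGNN-style coordinate update: u_i + w * sum_j (u_i - u_j)
    diffs = u[:, None, :] - u[None, :, :]
    return u + w * diffs.sum(axis=1)

# INVARIANCE: anything computed from ||u_i - u_j|| is unchanged by the
# translation, so h-features built from these distances stay invariant
assert np.allclose(pairwise_dists(u + b), pairwise_dists(u))

# EQUIVARIANCE: the coordinate output shifts with the input, so a policy
# reading actions off these coordinates inherits the shift, which is the
# pathology in the (0, -1) example above
assert np.allclose(coord_update(u + b), coord_update(u) + b)
```

This is why reading the policy's invariant action components from the $h$ channel keeps translation invariance even after the equivariance guarantee on the coordinate channel is dropped.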
--- Rebuttal 3: Comment: 3 continued) Note also that the paper in question [1a] regarding entropy-based exploration does not outperform the benchmarks on dense-reward environments; it only outperforms the benchmarks on sparse-reward environments. All of our environments (the standard benchmarks in MARL) have dense rewards, which seem to see little to no improvement from their approach. Furthermore, they do not show how to apply it to actor-critic algorithms such as PPO (what we use here, and one of the best/fastest commonly used algorithms for MARL [2a, RLlib]). Entropy-based exploration seems rather niche and has not become mainstream, especially not in MARL. Most of the state-of-the-art papers and libraries use Gaussian-parameterized policies [2a, RLlib, Stable Baselines, etc.]. 1a Cooperative Exploration for Multi-Agent Deep Reinforcement Learning 2a The surprising effectiveness of multi-agent PPO
Summary: This paper studies an intricate issue, called "early exploration bias", when applying EGNN, a GNN preserving Euclidean invariance/equivariance, to cooperative MARL that exhibits Euclidean symmetries. The paper reveals that, with randomly initialized weights, the output of the EGNN layers is not centered around zero, which can cause difficulties for cooperative MARL. The paper then proposes a convenient solution to this issue, at the cost of removing translation invariance/equivariance. Strengths: - The paper is overall easy to follow in terms of its motivation. - The issue of early exploration bias is well-described using an example (Figure 3), discovering an intricate issue when applying EGNN to MARL. Weaknesses: - Euclidean equivariance in deep MARL has been explored in several recent works, namely [20] and [Esp: Exploiting symmetry prior for multi-agent reinforcement learning]. This paper's novelty against these recent works is limited. The most significant contribution seems to be limited to the pathology of EGNN, fixing it to overcome the "early exploration bias" issue. It is unclear whether this pathology and its treatment are even relevant in other types of equivariant NNs (e.g., [20] uses the alternative SEGNN). Moreover, the paper suggests its method can deal with discrete movement actions at the beginning of Section 4.3, which could be a significant novelty over prior work. However, it turns out that the paper bypasses this issue by changing the movement actions in the original SMACv2 from discrete to continuous. - The method is motivated by the "early exploration bias". Its cause and the proposed remedy seem very specific to the proposed usage of EGNN, and particularly specific to (MA)RL (e.g., similar issues could arise in ML for chemistry). The paper might benefit from an investigation and discussion of EGNN used in other domains/applications. 
- The presentation of the theoretical results (the Theorems and Corollaries) and the proposed methods lacks clarity. Please refer to the Questions on this. - This paper's argument on translation equivariance is not that convincing. The proposed method (E2GN2) removes translation invariance/equivariance from EGNN, and the paper argues (e.g., Lines 213-218) that translation invariance/equivariance is not a benefit for MARL. Lines 213-218 use an example to argue against translation equivariance, while previous work (e.g., [20]) suggests translation invariance indeed benefits MARL, supported by both theoretical and empirical results. So, the paper could benefit from a revised approach that preserves translation invariance, not equivariance. - There are minor issues regarding the quality of writing, e.g., - There is an unwanted line break between lines 110 and 111. - Line 122-123: $T_g$ should be $Y \to Y$ and $L_g$ be $X \to X$. - Theorem 1: the output vector should be $u_i^{l+1}$. - The format of the references is sloppy, e.g. [2, 19, 20, 22] Technical Quality: 2 Clarity: 2 Questions for Authors: 1. In Figure 1, why is "All learnable equivariant functions constrained by data" not a subset of "All learnable equivariant functions"? 2. The original EGNN paper [2] uses it for applications other than RL. Is EGNN's property presented in Theorem 1 also problematic for the applications there? 3. On the theoretical results: - 3a. In Theorems 1 and 2, what is exactly approximated/hidden in the approximate equalities? - 3b. In Corollary 1.1 and 2.1, can you clarify the use of subscripts $i$ vs $k$? - 3c. In Corollary 1.1 and 2.1, what's the definition of $s_k^{eq}$? - 3d. In Corollary 1.1 and 2.1, there seems to be a hidden assumption: $a_i$ has the same dimensionality as $s_k^{eq}$, according to the displayed equation? 4. How exactly is a policy represented by EGNN? - 4a. How is the input graph defined/constructed? - 4b. What is the dimension of $u^l_i$ for all l? - 4c. 
According to Section 3.3., the dimension of $h^l_i$ is $p$, how to map this $p$-dimensional vector $h_i$ to invariant action components (e.g., logits for the attack targets in SMACv2)? How to ensure such a mapping is equivariant/invariant? 5. The paper does not talk much about value function approximation. How exactly is the value function represented by EGNN? 6. On the experiments: - 6a. What specific GNNs are used in the experiments? - 6b. The experiments use a decentralized critic (IPPO?). Why not use a centralized critic (MAPPO), which is shown in previous work to perform better? - 6c. In Line 287-293, why is “permutation equivariance” relevant? What aspect(s) of the proposed method “allow for” the GNN to scale to larger numbers of agents, in a way that a vanilla GNN cannot? Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The paper mentions several future work directions, including addressing partial observability, exploring other domains like robotics. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough, in-depth response! Our replies are below: Weaknesses 1. Note that we added a new comparison with [20] demonstrating that ours outperformed the results from [20]. Regarding SEGNN, we initially decided to use EGNN specifically because SEGNN and related networks (i.e., those using spherical harmonics, Wigner-D matrices, etc.) tend to be slow. Looking at the original SEGNN paper, it is 10x slower than EGNN. MARL experiments are already too slow, so we did not want to exacerbate this problem. However, we did add a global response addressing adding SEGNN from [20] to our comparisons. Regarding the action space, please see the global rebuttal response. We showed this method is still relevant to SMAC when using discrete actions. Note that E2GN2 and EGNN outperform the baselines with the discrete action space as well. Our method in 4.3 was crucial for achieving this result. Note that [20] was not able to get competitive results on SMAC. Additionally, the solution in Section 4.3 is not only to change the movement actions to be continuous; it is to decompose the discrete action outputs such that the node for each enemy outputs one component of that action. This is crucial for retaining the GNN locality structure and allowing us to scale agents without retraining. 2. I agree it would be interesting to explore the use case in other domains. However, our focus in this paper is improving MARL, so perhaps we can explore this in the future. 3. Okay. We will update our paper accordingly. 4. The results from [20] demonstrate that translation \textit{invariance} is important. Our observation is with respect to translation \textit{equivariance}. We certainly observed that in practice translation equivariance was harmful. As a simple example, suppose an agent is at position (0, -1) and the optimal action for agent 1 is to move up: (0, 1). Now shift this down by 100, so the agent's position is (0, -101). 
A translation-equivariant network will shift the action by -100 as well, leading to the action (0,-99). This is not desirable for many MARL environments. When testing in MPE and SMAC, this may not cause a large problem, as the inputs are often normalized to be near (-1,1). Questions 1. Thank you for catching this. The orange circle should say "All learnable functions constrained by data". 2. This is an interesting question to explore in the future. 3. On the theoretical results: 3b) I see how that is confusing. We will update to use just $i$ instead of $k$. 3c) $s_k^{eq}$ is the component of the action space that is equivariant; we will formalize this definition. 3d) Yes, that is an assumption that originates with EGNN. 4. How a policy is represented: 4a) The input graph is described in appendix B (we will change the word choice from "connections" to "edges"). For MPE we use a complete graph. For SMACv2 we use a graph that is complete between all friendly nodes; each friendly node also has an edge to each of the enemy nodes, and there are no edges between enemy nodes. This was an attempt to model what we imagine a real-world scenario to look like (you see the enemies and communicate with your allies, but do not see the enemies' communications). 4b) This is partially in appendix B (we reference the width of the MLPs as 32); we can improve the wording by referencing the specific notation and add details for each environment. For the intermediate GNN layers, $u^l_i$ has a dimensionality of 32. For MPE the input $u^0_i$ is the id (pursuer, evader, or landmark). For SMACv2, $u^0_i$ is made up of the features: health, shield, unit type, and team. 4c) This is described in section 4.3 and is key to our approach. For SMACv2, the discrete logits for attack targets are the outputs of each node \textit{corresponding to the attack target}. The logit for attacking agent $j$ is the output $h_j^L$.
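To illustrate 4a and 4c concretely, here is a minimal sketch in our own words (not the authors' code; the function names, the edge direction, and the choice of output channel are assumptions):

```python
def build_smacv2_graph(n_friendly, n_enemy):
    """Edge list for the structure described in 4a: friendly nodes form a
    complete subgraph, each friendly node has an edge to each enemy node,
    and enemy nodes share no edges among themselves.
    Nodes 0..n_friendly-1 are friendly; the rest are enemies."""
    edges = []
    for i in range(n_friendly):
        for j in range(n_friendly):
            if i != j:
                edges.append((i, j))  # complete graph among friendlies
        for e in range(n_friendly, n_friendly + n_enemy):
            edges.append((i, e))  # friendly-to-enemy edge (direction assumed)
    return edges


def attack_logits(h_final, enemy_ids):
    """4c: the logit for attacking enemy j is read off that enemy's own
    invariant node output h_j^L, then concatenated across enemy nodes.
    h_final maps node id -> output vector; using channel 0 is our assumption."""
    return [h_final[j][0] for j in enemy_ids]
```

Because each attack logit lives on the corresponding enemy node, adding an enemy just adds a node, and with it a logit, without resizing any output layer.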
Since the output of each individual node $h_j^L$ is invariant, these logits will then remain invariant. These logits are concatenated from each of the nodes to compose the final action. This allows us to scale to larger numbers of agents without retraining. 5) We did not spend much discussion on the value function approximation, as it has been discussed in prior works [19,21]. The value function is fairly straightforward: we simply used an EGNN/E2GN2 for the value function. The output of the agent's own node's invariant component is the value function output, i.e., $u_i^L$ for agent $i$. We will add this to appendix B. 6) On the experiments: 6a) We will add this to the appendix. We used the same GNN as the EGNN paper: specifically, we use equations 1 and 3, but update equation 1 to be $m_{ij} = \phi(h_i, h_j, x_i, x_j)$. 6b) A standard GNN layer is permutation equivariant [24]. It also has a local structure that enables adding/removing nodes without needing to retrain the layers. For example, [1a] demonstrates using a GNN to train $N$ agents, then to control $N+n$ agents without retraining. However, this example did not operate with discrete or mixed action spaces as we do with SMAC. Our formulation in 4.3 enables us to retain the permutation equivariance of a GNN. An alternative to our approach is to add an MLP at the end of the GNN to convert the GNN outputs to the mixed action spaces. This would lose the scalability/locality of the neural network: for example, if you add two more enemies, how do you modify the final MLP to expand the action space? In our formulation, whenever a new enemy is added to the environment, we can simply add the node $N+1$ to the graph. The action space of the GNN will then be supplemented with $u_{N+1}^L$, allowing us to expand the action space without retraining. New Citations: [1a] Learning Transferable Cooperative Behavior in Multi-Agent Teams (https://arxiv.org/abs/1906.01202) (GNN) --- Rebuttal Comment 1.1: Comment: Thanks for your response.
It seems that Questions 3a and 6b were not included in your rebuttal, if you would still like to provide a response. Thanks. --- Rebuttal 2: Comment: (continuing discussion on point 1 under weaknesses) To further clarify, the issue with the discrete action space was not necessarily maintaining exact continuous equivariance (we will clarify this in 4.3). The problem is that the SMACv2 action space has an equivariant component (movements) and an invariant component (shoot). The outputs for the equivariant component of EGNN/E2GN2 are continuous values, and mapping these values to logits is not straightforward. Furthermore, they would need to be mapped onto the same distribution of logits being represented by the shoot commands. We solve this problem by having three distributions output by the GNN structure: the continuous/equivariant movement Gaussian distribution, the discrete/invariant distribution, and a third distribution that determines whether we should move or shoot (described in lines 290-292). After RL sampling is performed from each distribution, the three components of the action can be converted to the final action for either a mixed discrete-continuous action space or a discrete action space. --- Rebuttal Comment 2.1: Comment: In your added experiments, did you also use this three-distribution technique for E3AC? Also, is the third distribution (move or shoot) equivariant/invariant? --- Rebuttal 3: Comment: Thanks for the reminder. Here are the responses to those points. 3a. See lines 457 and 469 (I tried copying it here, but the LaTeX kept breaking). 6b. That is a good point to bring up. For clarity, the difference between MAPPO and PPO is that MAPPO uses a centralized critic that takes in the observations and actions of all agents. We believe E2GN2 would improve MAPPO, and we hope to eventually apply this to MAPPO. We use independent PPO because 1) MAPPO performance can vary depending on the design of the centralized critic [see figure 4 in 1a].
2) For many cases IPPO is comparable to MAPPO [1a]. Since we used parameter sharing [2a], all of our agents used the same critic, and since the environments were fully observable, the critic had all observations as input. So using IPPO with parameter sharing and full observability is a fairly close approximation. 3) We wanted to use existing, commonly used public libraries to increase reproducibility and utility to the community. We selected RLlib since it seemed to be a popular benchmark; RLlib does not have MAPPO built-in, so it must be added on your own, and implementations can vary (reducing reproducibility). One more clarification on point 4 of the weaknesses section: E2GN2 is still translation invariant (just not translation equivariant). We can add a full proof to the appendix, but briefly: we did not modify the equation for computing $h_i$, and we still use the output of $h_i$ for the invariant components. If we translate the input by $b$, the output is translated by $\phi_u(m_{ij}) b$, i.e., $u_i^l \mapsto u_i^l + \phi_u(m_{ij}) b$. The output $h_i^{l+1}$ is then translation invariant provided $\| u_i - u_j \|$ is, and indeed $\| (u_i^l + \phi_u(m_{ij}) b) - (u_j^{l} + \phi_u(m_{ij}) b) \| = \| u_i^l - u_j^l \|$, so we retain translation invariance. 1a https://arxiv.org/abs/2103.01955 2a https://arxiv.org/abs/2005.13625 --- Rebuttal 4: Comment: For E3AC, we used the model and observation processor from the E3AC GitHub repo, so no, E3AC was not using the three-distribution technique we introduced here (since that was part of the novelty of our work). They did not have an observation processor for SMAC in their repo. We wrote the SMAC observation processor by modifying their MPE observation processor according to the details in their appendix. Note that since E3AC was unable to resolve the difficulty with the SMAC action space (using SEGNN), they opted to use an MLP for the policy and an SEGNN for the value function.
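As a quick numeric aside on the translation-invariance argument from the reply above (our own sanity check, not code from the paper): shifting every position by the same vector $b$ leaves all pairwise norms, and hence the invariant features that depend on them, unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=(5, 2))      # positions of 5 nodes in 2-D
b = np.array([100.0, -42.0])     # an arbitrary global translation

def pairwise_norms(u):
    """All pairwise norms ||u_i - u_j|| as a matrix."""
    diff = u[:, None, :] - u[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# Translating every node by the same b leaves every ||u_i - u_j|| unchanged.
assert np.allclose(pairwise_norms(u + b), pairwise_norms(u))
```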
The third distribution is invariant; the logits are output from $h_i^L$ for agent $i$ (i.e., the invariant part of the output node corresponding to that agent). https://github.com/dchen48/E3AC --- Rebuttal Comment 4.1: Comment: I would raise my rating from 3 to 4, considering the authors have made faithful efforts (especially the additional experiments) to address my comments. However, I am still inclined to recommend a reject, because most of my significant concerns are not yet addressed: The paper made two contributions: (i) correcting EGNN's idiosyncrasy that causes "early-exploration bias" (sections 4.1 and 4.2) and (ii) dealing with "complex action spaces" (section 4.3). - For (i), the paper does not justify why we should stick with EGNN. Switching to a better equivariant architecture should solve the problem. After all, the choice of architecture is just a hyperparameter on top of prior work like [20]. The EGNN's idiosyncrasy is totally independent of MARL, which is why I encouraged the authors to investigate EGNN in other applications/domains. - For (ii), it is a separate contribution from (i), as the "three-distribution" technique can be applied to architectures other than EGNN as well. It seems a valid novelty (prior work like [20] does not address this well), but it seems too incremental to justify an accept. Besides, the paper would benefit from a clearer and more rigorous presentation of its theoretical results and method description. --- Reply to Comment 4.1.1: Comment: Regarding the performance of E2GN2 on other applications: we were also curious and tested E2GN2 on the QM9 problem (from the EGNN paper). On several QM9 tests, we saw little to no change in performance. We believe this is because the QM9 test predicts invariant quantities, while our modification to EGNN primarily affects the equivariant component. We did not see an improvement on the n-body trajectory prediction problem either.
E2GN2 does not help for trajectory-prediction-type problems because, on those problems, biasing the output by the position is a desirable trait (i.e., it approximates kinematics, numerical solvers, etc.). These were just preliminary tests. Furthermore, it is important to consider the translation equivariance property of E2GN2 (again, we will add the above discussion on translation equivariance to the appendix). In trajectory prediction, translation equivariance is helpful (perhaps why E2GN2 does not improve the n-body task). In MARL, translation equivariance is undesirable and harmful. Indeed, simply being translation equivariant introduces some bias to the network: as an example, a policy network $\pi(\cdot)$ is translation equivariant if $\pi(x+b) = \pi(x)+b$, so translation equivariance does introduce a bias to the controls. We believe that is partly why E2GN2 greatly outperforms SEGNN on the Simple Tag problem: an SEGNN policy will have some measure of bias in its output due to the translation equivariance. Of course, each of these architectures may have more or less bias, depending on each specific formulation. So one of these other architectures may perform better than EGNN, but they will likely have some measure of bias, as they were all designed for a different problem space than MARL/controls. Although there may be other options, we selected EGNN because it had competitive results on supervised learning benchmarks and it was much more computationally efficient. Our results and comparisons seem to indicate this was a fruitful decision. --- Rebuttal Comment 4.2: Comment: As the discussion period is coming to a close, we would like to thank you again for taking the time to review this paper. We wanted to aggregate our recent comments into a single comment and add a couple of points. To summarize responses regarding the innovations in the paper (see previous comment for more detail): For i: We selected EGNN over other architectures due to computation speed.
It takes 40 minutes to train E2GN2 vs. 4 hours for SEGNN; this is why we stick with EGNN and then need to fix the exploration bias. We also note that E2GN2 outperformed both EGNN and SEGNN on MPE Tag. For ii: We believe this novelty is important, even if it is not overly complex. It was critical for using Equivariant GNNs in a challenging MARL benchmark, and led to enormous improvements in SMAC Terran/Protoss. Many AI innovations are simple in concept, but large in impact. SMAC is a much more complex MARL problem due to heterogeneous agents with different capabilities, action spaces, etc. It requires complex coordination and cooperation between agents (see previous comment for more detail). https://www.youtube.com/watch?v=5mUqtGir4e0 https://www.youtube.com/watch?v=VOdiYB3Ut8I (example videos) iii: We believe a third novelty of this paper was applying EGNN to MARL. This did not previously exist in the literature, and EGNN is more feasible than SEGNN (or related formulations) for usage in MARL due to the quicker training time. This may be the key to broader usage of equivariant networks by the MARL community. One quick note (regarding novelty): we had much of our results before [20] was published, and did not find it until much of our paper was written. --- Rebuttal 5: Comment: Thank you for the response and for upgrading the score! We appreciate you taking the time to carefully consider this work. To clarify, are the significant concerns the two listed in your comment (i and ii)? For i: Our greatest rationale for using EGNN (over SEGNN or other architectures) is the speed in computation (see the charts in the main rebuttal PDF): it takes 40 minutes to train E2GN2 vs. 4 hours for SEGNN. This makes a big difference for MARL practitioners. This is why we stick with EGNN and then need to fix the exploration bias.
We also note that while SEGNN outperformed EGNN in the original SEGNN paper, here we see that for MPE Simple Tag EGNN seems to outperform SEGNN. For ii: While this novelty may be fairly straightforward, we have demonstrated it was critical for MARL scalability and performance on the much more complex SMAC environment (and it opens the door for other MARL environments with discrete or mixed action spaces and both equivariant and invariant components). There are many novelties (such as the ResNet paper with its skip connections) that were simple in concept but very important for improving performance, as we see in this paper. iii: We believe a third novelty of this paper was applying EGNN to MARL. This did not previously exist in the literature, and EGNN is more feasible than SEGNN (or related formulations) for usage in MARL due to the quicker training time (in hours). This may be the key to broader usage of equivariant networks by the MARL community. One quick note (regarding novelty): we had much of our results before [20] was published, and did not find it until much of our paper was written. --- Rebuttal Comment 5.1: Comment: I wanted to add a little bit more on why SMAC is such a big leap from MPE. MPE has homogeneous units with the same capabilities and is primarily focused on cooperative navigation. In SMAC, the units are heterogeneous with different capabilities (different attack ranges, total health, and sometimes action spaces). The unit types are randomized at the beginning of the scenario. The actions include more components than simply movement (as in MPE): agents can move and attack. Some units can heal instead of attack, some units simply explode, others can target multiple enemies. The goals are more complex as well. Instead of simply navigating cooperatively as in MPE, the agents must learn attack formations and strategies. Sometimes it may be optimal to sacrifice health or allies in pursuit of the greater strategic objective.
https://www.youtube.com/watch?v=5mUqtGir4e0 https://www.youtube.com/watch?v=VOdiYB3Ut8I (example videos)
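To make the three-distribution action head discussed in Rebuttal 2 of this thread concrete, here is a minimal sampling sketch (our reading of the description; the function and field names are hypothetical, and the exact parameterization is an assumption):

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

def sample_action(move_mean, move_std, attack_logits, mode_logits, rng):
    """Three heads: a Gaussian over continuous movement (equivariant output),
    a categorical over attack targets (invariant per-enemy logits), and a
    categorical gate deciding whether to move or shoot (invariant)."""
    move = rng.normal(move_mean, move_std)                              # equivariant
    target = int(rng.choice(len(attack_logits), p=softmax(attack_logits)))  # invariant
    shoot = bool(rng.choice(2, p=softmax(mode_logits)))                 # invariant gate
    return {"shoot": shoot, "move": move, "target": target}
```

For a purely discrete action space, the sampled continuous movement can then be discretized, matching the discrete-action conversion mentioned in the thread.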
Summary: This paper studies the setting of multi-agent reinforcement learning (MARL). The work tries to tackle the challenges of generalization and sample efficiency using inductive biases. In this case, the work proposes to use equivariant graph neural networks to model the policy and value function of a multi-agent actor critic. The work points out a bias in the output of such networks that harms exploration and proposes an architectural fix. It also proposes an architectural modification that will allow the usage of discrete actions in addition to continuous ones. The approach is evaluated on two different benchmarks and in some cases shows improved sample efficiency and final return compared to an MLP and graph neural network baseline. Strengths: I would like to preface this review by stating that I am not very familiar with the multi-agent RL literature. My estimation of the novelty of this approach has high uncertainty. However, I am very familiar with the standard RL literature. Problem statement * Understanding what types of inductive biases can lead to improved sample efficiency in reinforcement learning is in general an interesting and important problem given the high cost of sampling in the real world. Clarity * The text is well written and easy to follow. * The results are presented in an easy-to-parse manner. * Multiple little example plots are spread throughout the paper to increase readability. Novelty * From a brief lit review, and given that the work builds on a relatively recent neural network architecture and that other works are concurrently trying to integrate this architecture into MARL agents, I'm willing to believe that fixing the exploration bias and integrating multi-modal action prediction are novel contributions. Method * The proposed method uses a well-reasoned fix for a problem in an existing architecture to solve the exploration bias problem. This makes for an interesting investigation.
Related work * The paper highlights several other works that use equivariance in MARL. Weaknesses: Clarity * In the abstract the main claims are 10x improvements in sample efficiency and final reward. This is just unnecessarily exaggerated and possibly misleading. I think the paper would benefit from a clear depiction of what is actually shown. * The mathematical depiction of the model is unclear and needs to be improved. * Notation is in places not defined or ill-defined, e.g.: inputs to equations 1, 2, and 3 are unclear to me; some symbols are not defined, e.g., X and Y in section 3.2; the function $\phi_e$ is defined with respect to one input in L137 but takes multiple inputs in Eq. 1. * A visualization of equations 1, 2, 3 would improve readability. This could have been integrated with Figure 4 to not take up more space. * Shaded regions in experiments are not explained. I'm assuming they are standard errors. * In the examples for early exploration, a description of the MDP is required. It was unclear to me what the states, actions, and rewards are. * Theorem 2 is missing a pointer to a proof. Related work * As mentioned before, I am not an expert in the MARL field. The following might not be the most meaningful metric but I figured I'd point it out for completeness. In general, the number of cited papers in the manuscript is low compared to many other works. I'm not going to base my recommendation on this point because a small number of precise citations can be sufficient, but I'm pointing this out in case reviewers more familiar with the literature have similar concerns. That is, I don't necessarily expect more citations to be added to raise my score. Novelty * The text argues that this work is the first work to successfully demonstrate multi-agent equivariant networks on complex tasks, but points to some work whose tasks are not complex. It is unclear to me what the measure of complexity is and when a testbed would be sufficiently complex to make this claim about novelty correct.
See Q1. Experiments and Claims * Several claims are overstated and need to be adjusted or clarified. * The work claims that the method outperforms standard GNNs and MLPs by more than 10x sample efficiency. First, this is not validated in the experiments: I do not know what I should look at to get this 10x number. Further, it seems that this is not generally true, since in some cases the benefits are marginal (see Figure 6, right), and the claim needs to be rephrased. * In section 5.2, the text states that "equivariant networks have a clear improvement in sample efficiency over the MLP and GNN". This is not fully supported and needs to be concretized. In Figure 6 (right) the statement is incorrect. * The experiments in Figure 6 are somewhat inconclusive. The final performance of E2GN2 is within variance of EGNN for all three experiments, while also being conducted over only 5 seeds. For protoss and zerg, the whole training seems to be within variance, challenging the sample efficiency claim. Increasing the statistical significance of these results would make the paper stronger. * The experimental results in Tables 1 and 2 lack a measure of variation. Given that other results seem to be within variance, these measures should be added. * There is a lack of baselines that compare to prior work. It is, for instance, unclear to me why theoretical guarantees are useful if other approaches might be better. A comparison against the architectures from [19, 21] may have strengthened arguments for being the first to handle complex action spaces. * The paper states that the base models were chosen and special tricks from the literature were avoided (L255), but also that the proposed method "is compatible with most MARL actor-critic methods" (L78). First, the paper does not demonstrate that the latter claim is true. Second, I think the paper would have been stronger if it had been shown that the method works *with* these tricks and stronger methods.
In general, we have seen (at least in the single-agent literature) that combining many advancements is quite beneficial [1 mine, 2 mine, 3 mine]. This would also solve the problem of the baselines being weak. [1] Rainbow: Combining Improvements in Deep Reinforcement Learning. Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver. The Thirty-Second AAAI Conference on Artificial Intelligence, 2018. [2] Bigger, Better, Faster: Human-level Atari with human-level efficiency. Max Schwarzer, Johan Samir Obando Ceron, Aaron Courville, Marc G Bellemare, Rishabh Agarwal, Pablo Samuel Castro. Proceedings of the 40th International Conference on Machine Learning, PMLR 202:30365-30380, 2023. [3] Bigger, Regularized, Optimistic: scaling for compute and sample-efficient continuous control. Michal Nauman, Mateusz Ostaszewski, Krzysztof Jankowski, Piotr Miłoś, Marek Cygan. arXiv preprint arXiv:2405.16158. I am happy to slightly raise my score if the language of the claims is adjusted, limitations are addressed, and the experiments are made statistically significant. I am happy to raise my score more if additional baselines are included to strengthen claims and the generalizability to stronger approaches is demonstrated. Technical Quality: 2 Clarity: 2 Questions for Authors: Q1: Can you elaborate on what makes your environments more complex than the ones used by Van Der Pol et al. [19]? Q2: Can you elaborate on why previous work such as that by Van Der Pol et al. [19] does not observe the exploration bias? Q3: How does EGNN in the experiments do action prediction with discrete actions? Does it adapt the suggested solution? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The limitations of the work have not been addressed properly. I believe that this would be quite beneficial since the chosen experiments clearly have a structure that benefits from the imposed inductive bias.
This does not mean that any claims that are being made can be true **in general**. It is unclear to me whether these methods still function in environments that are not hand-picked for the inductive bias at hand. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review. We appreciate the time you took to dive into the details of our work and provide specific advice for improving these results. We tried to take your feedback into account in our updated results. We increased the number of seeds up to 10, and we added a new baseline (see the global rebuttal for those charts) from [20]. We will remove the language regarding "up to 10x improvement in sample efficiency" and replace it with simply "in many cases we see a significant improvement in sample efficiency over GNN and MLP networks". Perhaps that is worded better? Clarity Thank you for this feedback. We will make the recommended changes to the mathematical depiction, improve the definitions for equations 1, 2, 3, and add the Theorem 2 pointer to a proof. Experiments: * On further baselines: We addressed the concerns regarding the baselines [19, 20, 21] in the global rebuttal. We added the baseline [20] to our comparisons, and discussed [19, 21] further. Let us know if you have any other thoughts on these. Before the original submission we did attempt to use [21] as a baseline, but when we tried to implement their approach we did not see any improvement over the standard PPO approach for MPE (we tried various hyperparameters and learning rates as well). The experiments in [19] are similar to a grid-world-type environment. The input to the traffic control is an 8x8 grid, and the cars move one grid cell per timestep. Their wildlife monitoring environment is a 7x7 grid where the agents move around the grid. Extending their specific approach from a gridworld (with a discrete number of states) to SMACv2 would likely require further innovations. * On the shaded regions in the experiments: in the submitted paper these regions are the 95% confidence intervals computed by bootstrapping using Python's seaborn. Per your comment on standard errors, the updated plots (see global rebuttal) use standard error instead.
* On the number of seeds: we now have 10 seeds for each run. We note that due to the higher run time of MARL experiments, other MARL papers use around 10 seeds [1a, 2a]. Q1: On why SMACv2 is more complex: the units are heterogeneous with different capabilities (different attack ranges, total health, and sometimes action spaces). The unit types are randomized at the beginning of the scenario. The actions include more components than simply movement (as in MPE): agents can move and attack. The goals are more complex as well. Instead of simply navigating cooperatively as in MPE, the agents must learn attack formations and strategies. Sometimes it may be optimal to sacrifice health or allies in pursuit of the greater strategic objective (see lines 278-286). Q2: Van der Pol's approach is to find the basis for a specific group (i.e., rotations). Once they find the basis, they make the neural network weights a linear combination of the basis vectors. I don't believe their network would have a bias, but it has the limitation of being applied only to discrete grid-world-like environments (see global rebuttal for a further discussion on this). Q3: In the experiments on SMACv2 (using mixed discrete-continuous actions), EGNN does indeed adapt the suggested solution in 4.3, as does the GNN network. In fact, using the method described in section 4.3 is why GNN was able to increase the number of agents without retraining (Table 2). Limitations We plan to add a further discussion on limitations. It is true that the more symmetry in the environment, the greater the improvement we would see from EGNN/E2GN2. I will note these environments weren't necessarily hand-picked: MPE and SMAC are the standard go-to environments for MARL. For example, [1a] uses SMAC and MPE and it has 1000 citations. The SMACv2 paper identified problems with the original SMAC environment, which is why we use SMACv2.
That said, E2GN2/EGNN could be overly restrictive in environments where symmetries are inexact or unclear. If EGNN is assuming an exact symmetry, but that symmetry is inexact, this could cause a loss in performance as the inductive bias keeps the network from learning the optimal solution. Other citations 1a https://arxiv.org/abs/2103.01955 2a https://arxiv.org/abs/1706.05296 --- Rebuttal 2: Comment: As the discussion period is coming to a close, we would like to thank you for your time in the original review. To recap, we have added further discussion on the related works, added more seeds to our experiments, and added a related baseline [20], which our approach continues to outperform. We are eager to hear if our additions have addressed your concerns, and if there is anything else to add to this discussion. Although we added more seeds to the training charts, we had not yet updated the tables with more seeds (and standard errors). Here we present these:

Table 1 (with 10 training seeds)

| Environment | Network | Surrounded Left (Training) | Surrounded Right (Testing) | Surrounded All (Testing) |
| ----------- | ------- | -------------------------- | -------------------------- | ------------------------ |
| terran | E2GN2 | .57±.01 | .55±.01 | .63±.01 |
| terran | GNN | .43±.02 | .07±.01 | .27±.02 |
| terran | MLP | .33±.02 | .12±.02 | .24±.02 |
| protoss | E2GN2 | .59±.01 | .56±.02 | .57±.02 |
| protoss | GNN | .44±.02 | .08±.01 | .23±.01 |
| protoss | MLP | .42±.02 | .17±.02 | .27±.02 |
| zerg | E2GN2 | .34±.02 | .3±.02 | .31±.02 |
| zerg | GNN | .37±.02 | .06±.01 | .18±.01 |
| zerg | MLP | .24±.02 | .04±.01 | .12±.01 |

Table 2 (with 10 training seeds)

| | | 5 Agents (Train) | 4 Agents (Test) | 6 Agents (Test) | 7 Agents (Test) | 8 Agents (Test) |
| ------- | ----- | ---------------- | --------------- | --------------- | --------------- | --------------- |
| Terran | E2GN2 | .69±.02 | .65±.02 | .63±.02 | .62±.02 | .54±.04 |
| | GNN | .48±.02 | .45±.02 | .45±.02 | .40±.02 | .39±.02 |
| Protoss | E2GN2 | .62±.03 | .61±.02 | .59±.03 | .47±.04 | .37±.03 |
| | GNN | .38±.03 | .24±.02 | .35±.03 | .28±.03 | .2±.02 |
| Zerg | E2GN2 | .36±.03 | .32±.03 | .31±.04 | .23±.01 | .18±.03 |
| | GNN | .33±.04 | .29±.03 | .31±.03 | .29±.03 | .27±.03 |

Furthermore, one question raised was whether the tips and tricks from [1a] would improve the performance of our approach. First, we clarify that we largely used the majority of their tips and tricks, especially those relevant to performance (3 of their 5 tricks were using a large training batch size, a small number of SGD iterations, and a small clip size; we used all of these). Due to space in the rebuttal, we couldn't include this explicitly, but if you compare figure 1 from the rebuttal PDF to figure 5 from the paper, you may notice a difference between the performance of E2GN2 and EGNN. In Figure 1 of the rebuttal, all of the approaches used value normalization from [1a]; in the paper that is not necessarily true. In fact, value normalization was critical for E3AC learning at all. Thus we do observe that adding value normalization improves the results for MPE Tag. We did not see an improvement with any other environment (and [1a] didn't see an improvement for other environments either). Finally, in our blurb on limitations below we discussed how equivariance may be partial or incomplete in the real world. We also want to add that our approach would need improvements to be applied to environments with angular momentum and mechanics, which are not included in these benchmarks (though, to reiterate, these are the standard MARL benchmarks). Note that the total improvement from these approaches will likely depend on the amount of rotational symmetry applicable in the observations. Additionally, this approach is not directly compatible with environments using vision-based observations.
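The ± values in the tables above are standard errors over the 10 training seeds; as a generic sketch of that aggregation (ours, not the authors' script):

```python
import numpy as np

def mean_and_standard_error(win_rates):
    """Mean win rate and standard error of the mean across training seeds."""
    x = np.asarray(win_rates, dtype=float)
    return x.mean(), x.std(ddof=1) / np.sqrt(len(x))
```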
We note the reviewer initially wrote they would raise the score if we adjusted the language, addressed limitations, and added more seeds. The reviewer also mentioned raising the score more if we added a new baseline to strengthen the claims and demonstrated the generalizability to stronger approaches. We hope we addressed your concerns sufficiently; let us know what other concerns you may have. [1a] The Surprising Effectiveness of PPO in Cooperative Multi-Agent Games (https://arxiv.org/abs/2103.01955) --- Rebuttal Comment 2.1: Comment: Dear authors, I appreciate the thorough response and am glad you found some of my feedback useful and integrated it. "We will remove the language regarding "up to 10x improvement in sample efficiency" and replace it with simply "in many cases we see a significant improvement in sample efficiency over GNN and MLP networks". Perhaps that is worded better?" Yes, this wording is more appropriate. One could also talk about specific points in time or similar if that feels appropriate. I encourage the authors to make such adjustments where needed so that the claims accurately reflect what is being shown. Thus, my concerns with respect to the phrasing of the claims have been partially addressed. I'm saying partially because this really needs to be done for all the claims where required. I'm going to be optimistic and assume that the authors would in fact do this for the final version of the paper even though they have not presented all the rewording here. "Per your comment on standard errors, the updated plots (see global rebuttal) use standard error instead." I was not making a comment about which measure to choose but rather about the fact that you did not explain which one you were using. Bootstrapped confidence is just fine but the text should state that. "On the number of seeds, we now have 10 seeds for each run." These results are more convincing since the gap between methods is more pronounced.
For environment complexity, the problem is that I might say: "Previous approaches can not land a rocket on a moon and thus the work did not use sufficiently complex environments." I think being specific about the environment differences in the paper would be beneficial to support the claim of environment complexity. "Our approach is the first to successfully demonstrate Equivariant GNNs for MARL on standard MARL benchmarks with complex action spaces." The text that the authors point to here does not talk about action or observation spaces. Something like the response to Q1 might be useful to specify what exactly the claim is. I'm going to assume that the authors can include a similar paragraph in the next version of the paper. The new baseline results also put the work into a better perspective with respect to other approaches. Overall, I think the changes improve the presented manuscript significantly and I think at this stage it would be fine to accept this paper. I'm updating my score to 6. --- Reply to Comment 2.1.1: Comment: Thank you again for your review and also for increasing the score. It was very helpful to have specific actionable items to pursue in improving our paper. We believe this review helped improve the paper and strengthen the results. Yes, we will update all of the claims to make them more precise and specific, in particular in the abstract, introduction, and results section. We will update the comment in the results from "equivariant networks have a clear improvement in sample efficiency over the MLP and GNN" to "equivariant networks demonstrate improved sample efficiency over the MLP and GNN in the protoss and terran SMACv2 environments". After looking at the SMACv2 description in the paper, we agree that the description here is more useful for describing the complexities of this environment. We will add that to the description of SMACv2 in the paper.
We will also update references to 'complex action space' or 'complex environments' to be more specific, e.g., 'mixed discrete-continuous action spaces' and/or 'environments with multi-tiered objectives that require learning strategies to coordinate heterogeneous agents and capabilities'. (Here, by multi-tiered objectives we mean that in SMACv2 there are secondary objectives of killing individual agents and not losing your own agents, versus the overall objective of winning the scenario.) Thanks again for your time and effort in contributing to the AI/ML community.
Rebuttal 1: Rebuttal: We would like to thank all of the reviewers for their thoughts, comments, and advice for improving this research work. We know how busy you all are, and we are grateful you have taken time out of your schedules to provide a thorough and fair review. We are encouraged that the reviewers appreciated our insight into describing and addressing the bias of EGNN in MARL to improve sample efficiency. Based on the feedback below, we added an additional benchmark to our comparisons [20]. If accepted, we will update the plots with these included. We will refer to [20] as E3AC (referring to their title). As before, we ran [20] using RLlib's PPO to ensure we were using a common benchmark, and we trained using the hyperparameters listed in Appendix B. Although E3AC is competitive with EGNN on MPE simple spread, it fails to perform as well as E2GN2 on simple tag. Additionally, although their approach is sample efficient, it is very slow and cumbersome to run. This makes running, tuning, and developing E3AC fairly difficult. This is one of the reasons why we chose EGNN over other equivariant graph neural networks: EGNN is much quicker and simpler to run. Further, as we mentioned in our original paper, E3AC fails to bring the benefits of increased sample efficiency to the more complex problems posed by SMACv2. The results of E3AC are comparable to an MLP. E3AC seems to perform slightly worse on the Zerg environment. We believe this may be because some of the Zerg agents have more complex action spaces and dynamics (e.g., some of them explode on contact). Other benchmarks suggested for comparison include [19, 21]. For [19], their method is specifically formulated and tested on small discrete grid-world problems with simple dynamics and discrete up/down/left actions. For example, the traffic control problem has an input of a 7x7 grid.
Extending their work to continuous environments with large state spaces and large mixed discrete/continuous action spaces was not straightforward without modifications. For [21], we previously attempted to replicate their methods, but we failed to see any improvement over PPO on the MPE environments, let alone the more difficult SMAC environments. Several reviewers were curious about how our results would fare on SMAC with the default discrete action space instead of the mixed continuous-discrete space. We present those results here as well. To map the E2GN2 output to discrete actions, the continuous movements are mapped back to discrete actions by simply rounding to the nearest discrete action \textit{after} performing exploration (i.e., sampling from the Gaussian distribution). The remaining actions are already discrete. We believe this emphasizes the importance of our method described in Section 4.3 in extending EGNN structures to mixed discrete-continuous and fully discrete action spaces. Pdf: /pdf/1ed7c11b41e43ddc2bfacb0b20b223fa7984bbaa.pdf
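The round-after-exploration scheme described in this rebuttal can be sketched as follows. This is a minimal illustration, not the authors' code; the helper name `sample_discrete_move`, the scalar Gaussian head, and the five-action movement space are all assumptions:

```python
import numpy as np

def sample_discrete_move(mean, std, num_actions=5, rng=None):
    """Map a continuous movement output to a discrete action.

    Exploration happens in continuous space (sampling from the
    Gaussian policy head) *before* rounding to the nearest discrete
    action, as described in the rebuttal.
    """
    rng = rng or np.random.default_rng()
    # Sample from the Gaussian policy head (exploration step).
    z = rng.normal(mean, std)
    # Round to the nearest discrete action index and clip to range.
    return int(np.clip(np.rint(z), 0, num_actions - 1))
```

With `std=0` the mapping is deterministic (a mean of 2.3 rounds to action 2); with `std>0` the stochastic exploration is preserved in the continuous space and only the final sample is discretized.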
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper demonstrates for the first time the successful application of Equivariant Graph Neural Networks (EGNNs) to standard MARL benchmarks with complex action spaces. To address the Early Exploration Bias, the paper provides proof of the biases in standard EGNNs and the unbiased nature of E2GN2. In practice, it is achieved by adding an additional multi-layer perceptron (MLP), which helps to offset the bias from the previous layer's output, leading to actions that are not biased towards the agent's current position. E2GN2 provides a method to handle complex and heterogeneous action spaces, which are common in MARL environments. It leverages the GNN's graph structure to output different components of the action space from different nodes in an equivariant manner, allowing for a more flexible and scalable approach to handling mixed discrete/continuous action spaces. The paper shows that E2GN2 outperforms standard GNNs and MLPs by more than 10 times in sample efficiency on common MARL benchmarks like MPE and SMACv2. It also demonstrates greater final reward convergence and up to 5 times gain in generalization over standard GNNs. Strengths: This paper proposes a novel method with interesting insight and sound theoretical analysis. Weaknesses: see questions part Technical Quality: 3 Clarity: 3 Questions for Authors: The paper introduces Exploration-enhanced Equivariant Graph Neural Networks (E2GN2), a novel architecture that significantly boosts sample efficiency and generalization in Multi-Agent Reinforcement Learning (MARL) by leveraging equivariance to symmetries and addressing early exploration biases. However, I still have some concerns: 1. For the complex action space issue, the detail of the method is not enough. How do you map the embedding to action space, such as an additional MLP layer? For the value head, does it share the same embedding with the policy head? Section 4.3 and Fig. 4 provide the high-level idea but ignore the details. 2. 
For the experiment part, this paper seems to use independent PPO as the baseline, which is not widely used in most MARL papers. Although the improvement is significant compared with PPO, it would be better to apply E2GN2 to other MARL methods. 3. Another issue is the input of the network. To use the EGNN, it seems that the states of all agents are required. However, we assume that only partial observations can be obtained in most MARL settings. For example, we cannot get the observation of opponents in SMAC. Please provide more details of the implementation. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: see questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and comments. 1. There is no MLP on the output of the GNN, EGNN, or E2GN2. Adding an MLP to the output would cause us to lose the guarantee of equivariance as well as the ability to add more agents without retraining. The output is similar to what is shown in the diagram: it is composed of outputs from various nodes, all coming from the $L$th layer of the network. For agent $i$, the movement component of the action is output from the agent's own node's equivariant component $u_i^L$. For SMAC, the invariant component of the action (i.e., the logits for determining which target to attack) is output from the invariant component of that target's node, $h_j^L$, where $j$ is the potential target/enemy (see lines 232-239 and 290-293). I hope that helps clarify things; we will add this to Appendix B. We use separate networks for the policy and value function. The value function output comes from the invariant component of the agent's node in the final layer of the EGNN/E2GN2. In other words, the value function is $h_i^L$, where $L$ is the final layer and $i$ is the agent making the decision. 2. That is a good point to bring up. We believe E2GN2 would improve MAPPO, and we hope to eventually apply this to MAPPO. We use independent PPO because 1) MAPPO can be more difficult for one-to-one comparisons [1], as it depends on how the centralized critic is shaped; 2) in many cases IPPO is comparable to MAPPO [1]; and 3) we wanted to use common public libraries for more reliable open-source comparisons. We selected RLlib since it is a popular benchmark, but RLlib does not have MAPPO built in; it must be added on your own, and implementations can vary. 3. This is an input variable to SMACv2. A user can change the variable 'partially\_observable' to False to make it fully observable.
Note that we also did a preliminary comparison with partially observable observations (per the SMACv2 standard) and saw surprisingly good performance (see appendix). Citation [1]: The Surprising Effectiveness of Multi-Agent PPO --- Rebuttal Comment 1.1: Comment: One other note regarding point 2: since we used parameter sharing [2a], all of our agents were using the same critic. Since we had fully observable environments, the critic had all observations as an input. So using IPPO with parameter sharing and full observability is a fairly close approximation. 2a https://arxiv.org/abs/2005.13625
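The node-to-action mapping described in point 1 of the rebuttal above might look roughly like the following. This is a schematic sketch; the feature shapes, the use of the first invariant channel for the attack logits, and the sum-reduction for the value are our assumptions, not the authors' implementation:

```python
import numpy as np

def e2gn2_heads(h_L, u_L, agent_idx, enemy_idxs):
    """Assemble policy and value outputs from final-layer EGNN features.

    h_L: (num_nodes, d) invariant node features from layer L
    u_L: (num_nodes, 2) equivariant node components from layer L

    Movement comes from the agent's own equivariant component u_i^L,
    target-selection logits from the invariant components h_j^L of the
    candidate targets, and the value estimate from h_i^L (reduced to a
    scalar by a sum here purely for illustration).
    """
    move = u_L[agent_idx]                                      # equivariant movement action
    attack_logits = np.array([h_L[j, 0] for j in enemy_idxs])  # invariant target logits
    value = float(h_L[agent_idx].sum())                        # invariant value estimate
    return move, attack_logits, value
```

No MLP sits on top of these outputs, which is the point of the rebuttal: each action component is read directly off the appropriate node of the final layer, preserving equivariance and allowing new agents (nodes) without retraining.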
Adversarially Robust Dense-Sparse Tradeoffs via Heavy-Hitters
Accept (poster)
Summary: The authors present an algorithm for adversarially robust $L_p$-estimation in the turnstile streaming model, which improves on the space complexity of existing algorithms in the regime $p \in (1, 2)$. They achieve this improvement via an alteration of the dense-sparse tradeoff technique of the existing state-of-the-art algorithm, where they show it is sufficient to track the heavy hitters while keeping track of an estimated residual. In addition to the improved $L_p$-estimation algorithm, the results include an adversarially robust streaming algorithm for the heavy-hitters problem along with an adversarially robust algorithm for estimating the residual of the frequency vector. The authors also include experiments providing support for their theoretical improvement guarantees. Strengths: - The authors do a great job of situating themselves in prior work, and explaining exactly how their results differ and improve. - While I am not well-versed in this area, the results seem correct and were clearly explained and decomposed. Weaknesses: - The actual improvement over the previous work seems very minimal, i.e. a tiny reduction in the space complexity on a very small portion of possible choices of p. However, the relation to heavy-hitters seems like an interesting and non-trivial insight used to achieve this improvement, and the algorithms for heavy hitters and residual estimation seem to me to be of independent interest. - The authors do a great job at explaining the intuition behind their techniques and how they fit with existing works. However, I found the actual preliminaries to be extremely confusing. The paper would greatly benefit from a clearer introduction on the exact setting and definitions of the various variables. I found the introduction of https://arxiv.org/pdf/2003.14265 to be very helpful in clarifying my confusion.
Technical Quality: 3 Clarity: 3 Questions for Authors: - What is the variable b that is referred to in Algorithm 3 (in the parameters for LZeroEst and ResidualEst)? Perhaps I am missing something but I don't see where it is defined. - I was a bit confused on the purpose of the empirical evaluations. It seems that you demonstrate that empirically in one real-world instance, the flip number of the residual vector can be significantly smaller than the flip number of the entire datastream. However, if I am understanding correctly, your algorithmic guarantees depend on the worst-case flip-number of the residual vector, and are not adaptive to the actual flip number. How should I interpret these empirical results with respect to the performance of your proposed algorithm? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I think the authors have adequately addressed limitations and the assumptions made in their results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The actual improvement over the previous work seems very minimal, i.e. a tiny reduction in the space complexity on a very small portion of possible choices of p. However, the relation to heavy-hitters seems like an interesting and non-trivial insight used to achieve this improvement, and the algorithms for heavy hitters and residual estimation seem to me to be of independent interest. Indeed, we do agree that in some settings, the quantitative improvement is not large. However, we emphasize that establishing optimal bounds for adversarial insertion-deletion streams is a major open question, and therefore the main strength of our work is the conceptual message that, despite the lack of recent progress, previous results are not optimal. Thus we hope our work will inspire future research in this direction. > The authors do a great job at explaining the intuition behind their techniques and how they fit with existing works. However, I found the actual preliminaries to be extremely confusing. The paper would greatly benefit from a clearer introduction on the exact setting and definitions of the various variables. I found the introduction of https://arxiv.org/pdf/2003.14265 to be very helpful in clarifying my confusion. Thanks for the suggestion. We will expand the preliminaries to reiterate the model described in Section 1 (lines 50-67) with the discussion specifically centered around the $L_p$ heavy-hitter problem and/or the $L_p$ norm estimation problem. > What is the variable b that is referred to in Algorithm 3 (in the parameters for LZeroEst and ResidualEst)? Perhaps I am missing something but I don't see where it is defined. Similar to Algorithm 1, $b$ is the number of queries made to the oblivious algorithms. Since the stream has length $m$ and the stream is split into blocks of length $\ell$, then there are at most $\frac{m}{\ell}$ queries made to each of the algorithms, as in Algorithm 1.
We will explicitly clarify the setting of $b=\frac{m}{\ell}$ in Algorithm 3. > I was a bit confused on the purpose of the empirical evaluations. It seems that you demonstrate that empirically in one real-world instance, the flip number of the residual vector can be significanly smaller than the flip number of the entire datastream. However, if I am understanding correctly, your algorithmic guarantees depend on the worst-case flip-number of the residual vector, and are not adaptive to the actual flip number. How should I interpret these empirical results with respect to the performance of your proposed algorithm? Yes, that's a fair point. However, we remark that the algorithmic guarantees of the previous results also depend on the worst-case flip-number of the entire vector, so in some sense we are still comparing apples to apples by comparing the empirical flip-number of the residual vector to the empirical flip-number of the entire vector. In particular, the algorithm must still commit to some budget for the flip-number before the algorithm is initialized and our empirical results show that the same budget can handle significantly more updates for our algorithm. --- Rebuttal Comment 1.1: Comment: Thanks for your response! I will tentatively raise my score to a 7: Accept.
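For readers unfamiliar with the flip-number budget being discussed, the empirical $\varepsilon$-flip number of a tracked quantity is the number of times it leaves a $(1+\varepsilon)$ multiplicative band around its last recorded value. It can be computed with a short script; this is our sketch of the standard definition, not the paper's code:

```python
def flip_number(values, eps):
    """Count how many times the sequence leaves a (1+eps)
    multiplicative band around the last recorded value --
    the empirical epsilon-flip number."""
    flips = 0
    last = values[0]
    for v in values[1:]:
        if v > (1 + eps) * last or v < last / (1 + eps):
            flips += 1
            last = v  # re-anchor the band at the new value
    return flips
```

The rebuttal's point is that both the prior algorithm and the new one must commit to a worst-case budget of this form up front, so comparing the empirical flip number of the residual vector against that of the entire vector is an apples-to-apples comparison of how far a fixed budget stretches.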
Summary: In adversarially robust streaming, one wants to design streaming algorithms that work well even in the interactive setting, in which the stream is not fully fixed in advance, but is constructed element by element by an adversary who can see the current solution provided by the streaming algorithm. If the algorithm is randomized, each intermediate solution may reveal some information about the internal randomness of the algorithm, which could allow the adversary to make updates that break the estimates of the algorithm. The paper concerns two fundamental problems in this setting: heavy hitters and moment estimation. In particular, the paper improves on the best known algorithm for moment estimation that considers two regimes, sparse and dense vectors. The paper shows that estimating $L_p$ moments for $p \in [1,2)$ can be done in smaller space than previously known. The achievement follows by leveraging known deterministic algorithms for heavy hitters and using them for tracking significant changes to the frequency vector. Strengths: * Insightful contribution to a very active research area. * Non-trivial improvement for some of the most popular streaming problems: heavy hitters and moments. The combination of different ideas to make this work is not easy. Weaknesses: The moments/norms for which the paper improves the space requirements are not the most important ones, which I think are $p=0$ and $p=2$. Technical Quality: 4 Clarity: 4 Questions for Authors: "Unusual" values of moments can be used for computing entropy (see Harvey, Nelson, Onak 2008). Do you think this could be used here to provide further motivation for your results? Your result heavily relies on Ganguly's heavy hitters result. I tried looking at this and related papers but I didn't have enough time to read them. Those are not the most frequently cited papers. Did you maybe verify their correctness?
Are your results in the general turnstile model (where frequencies can get negative) or the strict turnstile model (where deletions are allowed, but all frequencies are always non-negative)? In particular, this goes back to the results of Ganguly. Which of the models are they for? Because if they are only for strict turnstile, then the case of $p=1$ is trivial. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: No direct negative social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The moments/norms for which the paper improves the space requirements are not the most important ones, which are I think are $p=0$ and $p=2$. In general, a common goal is to find the heavy-hitters above a desired threshold that may be arbitrary. In this case, we want $|x_i|>\tau$ for some threshold $\tau$. We can then choose the values of $\varepsilon$ and $p$ accordingly, so that $\tau=\varepsilon\cdot\|x\|_p$ so that the $L_p$-heavy hitters correspond to the items whose frequency are above the threshold. > "Unusual" values of moments can be used for computing entropy (see Harvey, Nelson, Onak 2008). Do you think this could be used here to provide further motivation for your results? Yes, entropy estimation is certainly an additional application for our results, particularly since the previous streaming algorithm utilizes $F_p$ moment estimation for $p\in(1,2)$. Thanks for the pointer! > Your result heavily relies on Ganguly's heavy hitters result. I tried looking at this and related papers but I didn't have enough time to read them. Those are not the most frequently cited papers. Did you maybe verify their correctness? > Are your results in the general turnstile model (where frequencies can get negative) or the strict turnstile modeli (where deletions are allowed, but all frequencies are always non-negative)? In particular, this goes back to the results of Ganguly. Which of the models are they for? Because if they are only for strict turnstile, then the case of $p=1$ is trivial. Ganguly and Majumder's heavy-hitters algorithm uses Chinese remainder codes in their Lemma 9 statement and can be stated for the general turnstile model. 
Importantly, their Lemma 11 does not give an $\Omega(n)$ lower bound for the general turnstile model for the heavy-hitter problem because it only applies to the heavy-hitters problem "with parameter $s$" with $s=\Omega(n)$, where the additive error for each estimate is $\frac{\|x\|_1}{s}$ and the universe has size $n$. Furthermore, their work is subsequently extended to general error-correcting codes in [NNW12], which actually achieves a slightly better space bound in logarithmic dependencies. Specifically, [NNW12] constructs a deterministic matrix $A$ with Johnson-Lindenstrauss properties, so that maintaining $Ax$ for the underlying frequency vector $x$ is amenable even in the general turnstile model. [NNW12] Jelani Nelson, Huy L. Nguyên, David P. Woodruff: On Deterministic Sketching and Streaming for Sparse Recovery and Norm Estimation. APPROX-RANDOM 2012: 627-638 --- Rebuttal Comment 1.1: Comment: Thank you for the response and for reassuring me about the correctness of the papers by Ganguly et al. I'm happy to improve my score to 7 (Accept). I still stand behind my conviction that the cases of $p = 0$ or $p = 2$ would be more interesting. As for the entropy application question, let me just warn you about a subtle point if you decide to mention it in your paper. For a fixed constant $p$, you usually do not care if there is a multiplicative factor that depends only on $p$ in the complexity of estimating $\ell_p$. In the entropy application, you are not considering, however, a fixed $p$, but you select $p = 1 + \delta$, where $\delta$ is some function of $\epsilon$ and perhaps parameters of the stream. Then the final complexity will have an additional multiplicative factor that depends on $\delta$ and you have to make sure it's not prohibitively large. This is usually not what people focus on for moment estimation, but for standard moment estimation algorithms, you get a factor of at least $1/\delta$ I think.
You would have to check the literature devoted to fractional moment estimation and see what impact (if any) this has on the complexity of deterministic heavy hitters you use.
Summary: This paper focuses on $L_p$ estimation (of the frequencies of items in a stream) in the adversarially robust streaming setting. The previous work by Ben-Eliezer, Eden and Onak achieved an $\tilde{O}(m^{p/(2p+1)})$ space bound, which is slightly better than the $O(\sqrt{m})$ space bound due to the flip-number technique, by considering a sparse-dense tradeoff. The authors of this work question whether the above bound is tight or if it can be improved. The authors present an algorithm that beats the bound of $\tilde{O}(m^{p/(2p+1)})$. This is obtained by first building an adversarially robust streaming algorithm for $L_p$ heavy hitters, utilizing deterministic turnstile heavy-hitter algorithms with better tradeoffs. This is then combined with another algorithm that estimates the frequency of the tail vector, and has additive error and space independent of the size of the tail. Strengths: As the paper mentions, this is a conceptual breakthrough, showing that the previous bound of $\tilde{O}(m^{p/(2p+1)})$ is not tight. The paper explains all the ideas pretty well, and it was easy to read. The novel insight here is that in the dense-sparse tradeoff of [BEO22], in order to change the p-th moment by at least a $1+\epsilon$ factor, most of the updates are to a small number of coordinates, which then naturally must have either already been heavy-hitters, or become so after the update. The authors are able to effectively handle the hard case of [BEO22], resulting in an improved bound. Weaknesses: The only obvious weakness is that the result is a very slight improvement, sometimes in the third decimal place. While the insight is nice, the techniques are a linear combination of different algorithms for different cases. While I did not find the techniques very exciting, I think this passes the bar for NeurIPS. Technical Quality: 3 Clarity: 3 Questions for Authors: Do you see any potential barriers to this approach?
A discussion on what the limits are (is the bottleneck the heavy hitters algorithm or the tail-estimation algorithm?) would be greatly appreciated. This will also give the reader the assurance that this minor improvement is still significant as it pushes a new idea to a reasonable extent. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Apart from the limitation described in the theorem statement, the authors could discuss limitations of this approach, and what the current barrier is. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The only obvious weakness is that the result is a very slight improvement, sometimes in the third decimal place. While the insight is nice, the techniques are a linear combination of different algorithms for different cases. While I did not find the techniques very exciting, I think this passes the bar for NeurIPS. We do agree that in some settings, the quantitative improvement is not large. However, we emphasize that establishing optimal bounds for adversarial insertion-deletion streams is a major open question, and therefore the main strength of our work is the conceptual message that, despite the lack of recent progress, previous results are not optimal. Thus we hope our work will inspire future research in this direction. > Do you see any potential barriers to this approach? A discussion on what the limits are (is the bottleneck the heavy hitters algorithm or the tail-estimation algorithm?) would be greatly appreciated. This will also give the reader the assurance that this minor improvement is still significant as it pushes a new idea to a reasonable extent. The relatively larger space used by the deterministic heavy-hitter algorithm compared to the space-optimal randomized heavy-hitter algorithms is a major bottleneck on further improving the bounds for this approach. One hope would be to improve the space usage of the deterministic heavy-hitter algorithm. However, it is known that the existing bounds are optimal, i.e., there are no deterministic heavy-hitter algorithms that use less space. Thus, the current analysis we use is tight for this approach and any further improvements likely require significantly new or different algorithmic designs. --- Rebuttal Comment 1.1: Comment: Thank you for the response.
Summary: The work develops new algorithms in the adversarially robust streaming model. In this model, the algorithm observes updates to some vector arriving in the form of a data stream and maintains estimates of some property of the vector as it changes. At each time-step, the algorithm receives an update of the form $(i, \Delta_i)$, meaning that the $i$-th coordinate of the vector changes by $\Delta_i$. Also, at every time-step the algorithm outputs an estimate of some property of this vector. The work focuses on the adversarially robust setting, where the updates in subsequent time-steps can potentially depend on the estimates that the algorithm gave in previous time-steps. This is an important property of the streaming algorithm, because when decisions are made based on the estimates given by a streaming algorithm this will likely influence the data received by this algorithm in the future. At the same time, notably, many previously studied streaming algorithms are not adversarially robust. Additionally, the paper focuses on the turnstile setting, i.e. the updates $\Delta_i$ to the vector can be both positive and negative. The length of the data stream is denoted by $m$, and in the adversarially robust streaming setting the paper mainly studies the problems of $L_p$-heavy-hitter estimation and the problem of estimating the $L_p$ norm of the vector. The problem of adversarially robust $L_p$-heavy-hitter estimation requires the streaming algorithm to maintain a list of elements that contribute a lot to the $L_p$ norm of the vector. Formally, it requires the list to contain all elements $i$ for which $|x_i| \geq \epsilon \|x\|_p$, and all elements $j$ in the list should satisfy $|x_j| \geq (\epsilon/2) \|x\|_p$. Up to constants and logarithmic factors in $m$ and $\epsilon$, the $L_p$-heavy-hitter estimation algorithm requires $\epsilon^{-2.5} \cdot m^{\alpha}$ space, where $\alpha=(2p-2)/(4p-3)$.
This improves on the previous algorithm from the literature that achieves $\alpha=p/(2p+1)$. This constitutes an improvement for all $p$ in $[1,2)$, with the most pronounced improvement at $p=1$, where the space usage is improved from polynomial in $m$ to polylogarithmic. The work also presents new algorithms for estimating the $L_p$-norm of the underlying vector (up to a multiplicative factor of $1+\epsilon$). Up to polylogarithmic factors and constants, the amount of space used by the algorithm is of the form $m^c \cdot \mathrm{poly}(1/\epsilon)$. The exact dependence of $c$ on the value of $p$ is $c=(24p^2-23p+4)/((4p-3)(12p+3))$. As the paper notes, for $p=1.5$ we have $c=0.373$ whereas the best previous algorithm has $c=0.375$. The work improves on the dense-sparse decomposition method of the previous work. Strengths: - The adversarially robust streaming framework is natural and well-motivated. - For the $L_1$-heavy-hitter problem the work gives a qualitative improvement on the previous algorithms by improving the space use from polynomial in the stream length to polylogarithmic. Weaknesses: - Other than the $L_1$-heavy-hitter estimation, many of the improvements are extremely incremental, for instance the aforementioned improvement of the dependence on the length $m$ of the stream for $L_{1.5}$-norm estimation to $m^{0.373}$ from $m^{0.375}$. - Other than the results for $L_1$-heavy-hitter estimation, my understanding is that there is no evidence of optimality for the results given in this work. For example, to the best of my understanding, it can conceivably be the case that the above-mentioned dependence of $m^{0.373}$ could in the future be improved to polylog$(m)$. If this happens, the impact of this work could potentially be small. Technical Quality: 4 Clarity: 2 Questions for Authors: In subsection 1.1 what is the relationship between vectors $f$ and $x$?
In the rest of the paper $f$ is used to denote the frequency vector of the stream, whereas the vector $x$ does not appear to be invoked anywhere again. In any case, it would be helpful if the vectors $x$ and $f$ are clearly defined in the beginning of subsection 1.1. Subsection 1.1. would be easier to read if the paper used \paragraph to separate the parts of the subsection that address different problems. Confidence: 3 Soundness: 4 Presentation: 2 Contribution: 3 Limitations: It would be helpful if the authors added a dedicated paragraph in the paper that indicated what are the main limitations of the work according to the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
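The $L_p$ heavy-hitter guarantee stated in this review's summary can be made concrete with a small validity check. This is a hypothetical NumPy sketch of the definition; `is_valid_hh_list` is our name, not from the paper:

```python
import numpy as np

def is_valid_hh_list(x, hh_list, eps, p):
    """Check the L_p heavy-hitter guarantee from the summary:
    every i with |x_i| >= eps * ||x||_p must be reported, and every
    reported j must satisfy |x_j| >= (eps/2) * ||x||_p."""
    x = np.asarray(x, dtype=float)
    norm = np.linalg.norm(x, ord=p)
    must_report = {i for i, v in enumerate(x) if abs(v) >= eps * norm}
    allowed = {j for j, v in enumerate(x) if abs(v) >= (eps / 2) * norm}
    # The reported list must be sandwiched between the two thresholds.
    return must_report <= set(hh_list) <= allowed
```

For example, with $x = (10, 1, 1, 1)$, $p = 2$, and $\epsilon = 0.5$, only coordinate 0 may (and must) be reported, so the list `[0]` is valid while `[]` and `[0, 1]` are not.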
Rebuttal 1: Rebuttal: > Other than the $L_1$-heavy hitter estimation, many of the improvements are extremely incremental, for instance the aforementioned improvement of the dependence on the length $m$ of the stream for $L_{1.5}$-norm estimation to $m^{0.373}$ from $m^{0.375}$. > Other than the results for L_1-heavy-hitter estimation, my understanding is that there is no evidence of optimality for the results given in this work. For example, to the best of my understanding, it can conceivably be the case that the above-mentioned dependence of $m^{0.373}$ could in the future be improved to polylog$(m)$. If this happens, the impact of this work could potentially be small. We do agree that in some settings the quantitative improvement is not large. However, we emphasize that obtaining optimal bounds for adversarial insertion-deletion streams is a major open question, and therefore the main strength of our work is the conceptual message that, despite the lack of recent progress, previous results are not optimal. We thus hope our work will inspire future research in this direction. > In subsection 1.1 what is the relationship between vectors $f$ and $x$? In the rest of the paper $f$ is used to denote the frequency vector of the stream, whereas the vector $x$ does not appear to be invoked anywhere again. In any case, it would be helpful if the vectors $x$ and $f$ are clearly defined in the beginning of subsection 1.1. Because we address both the heavy-hitters and moment-estimation problems, we use both $f$ and $x$ to denote the same underlying vector. We will unify this notation with a single symbol in the updated version. > Subsection 1.1. would be easier to read if the paper used \paragraph to separate the parts of the subsection that address different problems. Thanks for the suggestion; we will add \paragraph headings to demarcate the discussions of heavy hitters and norm estimation in the updated version. --- Rebuttal Comment 1.1: Comment: Thank you for your response.
Tentatively, I keep my rating at 6: Weak Accept
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful comments and valuable insight. We also appreciate the positive feedback, such as: - The adversarially robust streaming framework is natural and well-motivated. (Reviewer KENe) - For the L_1-heavy-hitter problem the work gives a qualitative improvement over the previous algorithms by improving the space usage from polynomial in the stream length to polylogarithmic. (Reviewer KENe) - As the paper mentions, this is a conceptual breakthrough, showing that the previous bound of $\tilde{O}(m^{p/(2p+1)})$ is not tight. (Reviewer pbML) - The paper explains all the ideas pretty well, and it was easy to read. (Reviewer pbML) - The authors are able to effectively handle the hard case of [BEO22], resulting in a better improvement. (Reviewer pbML) - Insightful contribution to a very active research area. (Reviewer VAPi) - Non-trivial improvement for some of the most popular streaming problems: heavy hitters and moments. The combination of different ideas to make this work is not easy. (Reviewer VAPi) - The authors do a great job of situating themselves in prior work, and explaining exactly how their results differ and improve. (Reviewer B1cC) - The results seem correct and were clearly explained and decomposed. (Reviewer B1cC) We provide specific responses to the initial comments of each reviewer below. We hope our answers address all points raised by the reviewers, and we will be most happy to answer any remaining or additional questions during the discussion phase!
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering
Accept (poster)
Summary: This paper introduces G-Retriever, which combines LLMs, GNNs and RAG for graph question-answering tasks. The authors first develop a more comprehensive benchmark named GraphQA. Then they present G-Retriever, which has four main steps including indexing, retrieval, subgraph construction and answer generation. Specifically, G-Retriever uses an adapted PCST method, which also considers the importance of edge semantics. The constructed subgraph and the query are then sent to the LLM for final answer generation. Extensive experiments validate the effectiveness and efficiency of G-Retriever. Strengths: 1. The introduction of the GraphQA benchmark fills a significant gap in the research community by providing a comprehensive benchmark for evaluating graph QA applications. 2. The motivation of the proposed G-Retriever is clear and the paper is well-structured. 3. Extensive experimental results demonstrate that G-Retriever can consistently outperform baseline models. 4. Code is provided for reproducibility. Weaknesses: - In Section 5.1 (Indexing), the authors use a pre-trained LM to encode the text attributes of nodes and edges into representations. However, there is no ablation study to demonstrate the importance of the node attributes or the text attributes, which leaves an unexplored gap in understanding the contribution of these features to the overall performance of G-Retriever. - In Section 5.3 (Subgraph Construction), the authors use PCST to obtain a more refined subgraph. However, the motivation for selecting PCST is not clearly explained. Is PCST optimal for this task? How are the "optimally sized and relevant subgraphs" defined? At least the authors could consider justifying this by comparing against some naive approaches, such as a fixed number of nodes or a fixed similarity threshold. - The claim of the new benchmark is somewhat weak, as it primarily involves the reintroduction of an existing dataset.
Additionally, the contributions related to the QA formulation and graph reformatting are limited. - The evaluation of hallucinations lacks sufficient detail and requires further elaboration. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the questions in the Weakness section. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations of their work in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
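The indexing-and-retrieval steps summarized in this review (encode node/edge text attributes with a pre-trained LM, then rank them by semantic similarity to the query embedding) amount to a top-k cosine-similarity search. A minimal sketch under stated assumptions: random vectors stand in for LM embeddings, and all names are illustrative rather than taken from the paper's code.

```python
import numpy as np

def top_k_by_cosine(query_emb: np.ndarray, attr_embs: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k attribute embeddings most cosine-similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    a = attr_embs / np.linalg.norm(attr_embs, axis=1, keepdims=True)
    sims = a @ q  # cosine similarity of every node/edge embedding to the query
    return np.argsort(-sims)[:k]

rng = np.random.default_rng(0)
node_embs = rng.normal(size=(100, 16))              # stand-in for LM node embeddings
query = node_embs[7] + 0.01 * rng.normal(size=16)   # a query nearly identical to node 7
print(top_k_by_cosine(query, node_embs, 5)[0])      # 7
```

In G-Retriever this ranking feeds the subsequent PCST subgraph-construction step rather than being used directly as LLM input.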
Rebuttal 1: Rebuttal: We thank the reviewer very much for the careful reading and comments regarding our work. Please see below our answer to the raised comments/questions. > **Reviewer:** In Section 5.1 (Indexing), the authors use a pre-trained LM to encode the text attributes of nodes and edges into representations. However, there is no ablation study to demonstrate the importance of the node attributes or the text attributes, which leaves an unexplored gap in understanding the contribution of these features to the overall performance of the G-Retriever. **Authors:** The text attributes in G-Retriever play a critical role in three main areas: (1) Retrieval: Node and edge embeddings are used to select the top-k nodes and edges based on their semantic similarity to the query embedding. (2) GNN Input: These embeddings are fed into the GNN, which processes the graph structure and refines the embeddings to capture more complex relationships within the graph. (3) LLM Input: The retrieved subgraph is textualized and used as input for the LLM to generate the answer. To address the reviewer’s concern, we have conducted an ablation study to assess the contribution of these features. | Features | Without Node | Without Edge | |----------|--------------|--------------| | Retrieval | 66.58 | 58.37 | | GNN Input | 68.85 | 67.87 | | LLM Input | 56.32 | 68.24 | > **Reviewer:** In Section 5.3 (Subgraph Construction), the authors use PCST to obtain a more refined subgraph. However, the motivation for selecting PCST is not clearly explained. Is PCST the optimal for this task? How the "optimally sized and relevant subgraphs" are defined? At least the authors could consider to justify this by comparing some naive approaches, such as fixed number or fixed similarity threshold. **Authors:** Due to the character limit, we kindly refer the reviewer to our general response “G2: Elaboration on PCST-Based Retrieval”, where we have addressed this question in detail. 
> **Reviewer:** The claim of the new benchmark is somewhat weak, as it primarily involves the reintroduction of an existing dataset. Additionally, the contributions related to the QA formulation and graph reformatting are limited. **Authors:** Due to the character limit, we kindly refer the reviewer to our general response “G1. Contribution of the GraphQA Benchmark”, where we have addressed this question in detail. > **Reviewer:** The evaluation of hallucinations lacks sufficient detail and requires further elaboration. **Authors:** Thank you for your feedback. Due to space constraints, the detailed evaluation of hallucinations is provided in Appendix F. Here, we offer a summary: - **Experiment Design.** We instructed the LLM to answer graph-related questions and list the nodes or edges in the explanation graph that support its answers. Since there are no standard answers, evaluating the LLM’s responses becomes challenging. To address this, we manually examined 100 responses generated by both our method and a baseline method (LLM with graph prompt tuning) to verify whether the referenced nodes and edges actually exist in the graph. - **Evaluation Metrics.** We assessed the model’s faithfulness using three metrics: the fraction of valid nodes (Valid Nodes), the fraction of valid edges (Valid Edges), and the fraction of times the entire set of nodes and edges cited was valid (Fully Valid Graphs). - **Baseline.** We adapted MiniGPT-4 [57] to graph contexts as our baseline, using a frozen LLM with a trainable GNN that encodes graph data as a soft prompt (LLM+Graph Prompt Tuning). We focused on graph prompt tuning due to the large size of the textual representation of the graph, which often exceeds the input token limits of LLMs. - **Results.** As shown in Table 5, G-Retriever outperforms the baseline in reducing hallucinations. The baseline method showed only 31% valid nodes, 12% valid edges, and 8% fully valid node-edge sets. 
In contrast, G-Retriever achieved 77% validity in nodes, 76% in edges, and 62% overall validity in node-edge sets. These results highlight the effectiveness of G-Retriever in accurately referencing graph elements and significantly reducing hallucinations. We will ensure that these points are clearly elaborated upon in the revised manuscript for better understanding. --- Rebuttal Comment 1.1: Title: Thank you for your response. Comment: Thank you for your response. I am satisfied with the clarification and will maintain my positive review score. --- Rebuttal 2: Title: Response to Reviewer RHZZ Comment: Thank you very much for your time reviewing our answer.
Summary: This paper proposes a retrieval-augmented method G-Retriever for graphs with textual attributes. It introduces a graph question answering (GraphQA) benchmark by converting existing graph datasets into a uniform format. Subsequently, it proposes the G-Retriever method to answer questions related to the textual graphs. Specifically, it first retrieves the top-k most relevant nodes and edges from the textual graphs and then constructs a subgraph based on the retrieved nodes and edges using an existing algorithm called Prize-Collecting Steiner Tree (PCST). Subsequently, G-Retriever leverages a GAT model to obtain the pooled representation of the constructed subgraph, which is used as a soft prompt for an LLM to generate the answer. Strengths: The idea of retrieving subgraphs from textual graphs to augment the LLM is interesting. Experimental results on three datasets in the GraphQA benchmark demonstrate the effectiveness of the proposed G-Retriever model. The code is provided for reproducibility. Weaknesses: 1. The paper mentions “chat with their graph” in the abstract and the introduction. However, it is not very clear what this concept means. Additionally, after the abstract and introduction, there are no further introductions or explanations about this concept. 2. In line 69, the paper states that the proposed G-Retriever can improve the explainability. However, there are no empirical analyses regarding the explainability of the proposed method. 3. The novelty of the proposed GraphQA benchmark seems limited, as it only converts three existing graph datasets to a uniform format. 4. I think the main novelty of the proposed G-Retriever model is that it leverages a PCST algorithm to find a connected subgraph between the retrieved nodes and edges. However, the paper does not clearly explain the advantages of retrieving a subgraph. Why is it necessary to retrieve a subgraph rather than directly using the texts of the retrieved nodes and edges as inputs for the LLM? 
Constructing a subgraph could potentially introduce noise, which might negatively impact the performance. Additionally, the paper does not adequately justify the use of PCST for finding the subgraph. Have the authors explored alternative methods, such as the shortest paths between the retrieved nodes? 5. There are no quantitative or qualitative analyses regarding the quality of the retrieved subgraphs. Providing such analyses could offer some insights into the effectiveness of the subgraphs. 6. In the efficiency evaluation (section 6.3), the paper only compares the training times of G-Retriever and its variant without retrieval (I assume this is what “Before Retrieval” means, as there are no explanations about the columns in Table 4). However, it would be more appropriate to compare the inference times of G-Retriever and its variant during the inference stage. G-Retriever requires additional steps of retrieving nodes and edges, as well as constructing subgraphs using the PCST algorithm, which may result in longer inference times compared to not using retrieval. 7. A lot of necessary experimental details in sections 6.4 and 6.5 are provided in the Appendix, which may hinder the readers’ understanding of the results and analyses in these sections. For instance, it is impossible to understand what “Valid Nodes”, “Valid Edges” and “Fully Valid Graphs” mean in Table 5 without referring to the Appendix. Additionally, in section 6.4, the paper uses manual examination to evaluate the hallucination performance of LLMs. However, it is unclear how many annotators were involved in this process and whether there is a potential for biases during the evaluation. 8. The texts in Figure 1 and Figure 2 are too small to read. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. What does “chat with their graph” mean and how can the proposed G-Retriever model achieve this purpose? 2. How can the proposed G-Retriever improve the explainability? 
Are there any qualitative analyses to support this argument? 3. What are the advantages of using the PCST algorithm to retrieve subgraphs? 4. How to evaluate the quality of the retrieved subgraphs? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors have discussed the limitations of their work in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We wish to thank the reviewer very much for the careful reading and comments regarding our work. Please, see below our answer to the raised comments/questions. > **Reviewer (W1 & Q1):** Explanation on “chat with their graph” **Authors:** By “chat with their graph,” we mean that users can interact with the graph through a conversational interface. Users can input a textual graph and pose natural language queries, to which G-Retriever will respond in natural language. For example, as shown in Figure 1, if a user provides a mindmap-like explanation graph and requests an argument essay, G-Retriever can generate the essay accordingly. This feature enhances human-AI interaction, making the model more intuitive and aligned with human expectations. We will clarify and elaborate on this concept in the main sections of our revised manuscript. > **Reviewer (W2 & Q2):** Explainability of G-Retriever **Authors:** We believe G-Retriever enhances explainability in the following ways: - **Retrieved subgraph.** By returning the most relevant subgraph in response to a query, users can see which parts of the graph are considered important for the answer. This helps users understand the basis of the model’s responses. For example, if users want to understand why certain information is present or absent in the LLM’s response, they can inspect the subgraph to see whether such information is present or absent in the retrieved subgraph. - **Conversational Interface.** G-Retriever allows users to ask follow-up questions and receive detailed natural language explanations. For example, if a user questions the LLM’s response, they can ask, “Why do you think [xxx]? Please explain your answer.” This interactive capability enables users to explore the model’s reasoning process and gain deeper insights into how it interprets graph data. We will include specific examples in the revised version of our manuscript to illustrate the two explainability properties mentioned above. 
> **Reviewer (W3):** Novelty of GraphQA benchmark **Authors:** Due to the character limit, we kindly refer the reviewer to our general response “G1: Contribution of the GraphQA Benchmark”, where we have addressed this question in detail. > **Reviewer (W4 & Q3):** PCST and alternative retrieval methods **Authors:** Due to the character limit, we kindly refer the reviewer to our general response “G2: Elaboration on PCST-Based Retrieval”, where we have responded to this question in detail. > **Reviewer (W5 & Q4):** Quality of the retrieved subgraphs **Authors:** We quantify the quality of our retrieval method as follows: We examine the retrieved subgraph; if the label is contained within it, we consider it a successful retrieval. We calculate the retrieval success rate of our method and the retrieval method proposed in KAPING [1] on the WebQSP dataset. The results are as follows: - Hit@1 accuracy for [KAPING with top-k triple retrieval] is 60.81%. - Hit@1 accuracy for [Ours with PCST-based subgraph retrieval] is 67.87%. This demonstrates the effectiveness of our method. > **Reviewer (W6):** Efficiency evaluation **Authors:** We conducted additional experiments on the WebQSP dataset comparing inference times of G-Retriever with and without retrieval: | Method | Time in minutes | Accuracy with Hit@1 | |------------------------------|----------|--------| | G-Retriever | 9.55 | 70.49 | | G-Retriever (w/o retrieval) | 21.01 | 63.84 | Despite additional steps, G-Retriever is faster (9.55 vs. 21.01 minutes) and more accurate (70.49% vs. 63.84%). The speedup is due to: - PCST subgraph construction is very efficient. For instance, in the WebQSP dataset, even the largest subgraph (2,000 nodes, 6,104 edges) takes only 0.29 seconds to construct. - PCST retrieval reduces the graph size significantly, eliminating up to 99% of nodes in the WebQSP dataset, which in turn speeds up inference time. 
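The retrieval success criterion described in the rebuttal above (a query counts as a hit when its answer label appears in the retrieved subgraph) can be sketched in a few lines; the subgraphs and labels below are illustrative placeholders, not WebQSP data.

```python
def retrieval_hit_rate(retrieved_subgraphs, labels):
    """Fraction of queries whose answer label appears among the nodes
    of the retrieved subgraph (the success criterion from the rebuttal)."""
    hits = sum(1 for nodes, label in zip(retrieved_subgraphs, labels) if label in nodes)
    return hits / len(labels)

# Illustrative data: 2 of 3 retrievals contain the answer label.
subgraphs = [{"Paris", "France"}, {"Berlin"}, {"Tokyo", "Japan"}]
labels = ["France", "Munich", "Japan"]
print(retrieval_hit_rate(subgraphs, labels))  # 0.6666666666666666
```

Applied per query over a test set, this is the Hit@1-style number the rebuttal reports (67.87% for PCST-based retrieval vs. 60.81% for KAPING).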
> **Reviewer (W7):** Experimental details **Authors:** In Table 5, we assessed the model’s faithfulness using three metrics: the fraction of valid nodes (denoted as Valid Nodes), the fraction of valid edges (denoted as Valid Edges), and the fraction of times the entire set of nodes and edges cited was valid (denoted as Fully Valid Graphs). As recommended, we will update the table captions in our revised manuscript to include this information. Regarding the hallucination evaluation in section 6.4, the manual examination was conducted by the authors. We elaborate on the following points: **1. Necessity of Manual Evaluation:** Since standard answers are not available for these questions, the LLM’s responses are inherently flexible. This flexibility, along with the varied output formatting, makes automatic evaluation difficult, necessitating manual review. **2. Objectivity in Determining Hallucinations:** The determination of hallucinations is based on objective criteria. For example, in Table 1, in the response from “LLM w/ Graph Prompt Tuning,” it is stated: “The animal in the bushes is a deer. Nodes: * Deer (node 1), * Bushes (node 2); Edges: * Deer → Bushes (edge 1) * Deer → Grass (edge 2) * Bushes → Grass (edge 3)”. The fraction of “valid nodes” is 1/2 = 0.5, as “Bushes” exists in the scene graph while “Deer” does not. The fraction of “valid edges” is 0/3 = 0, since none of the mentioned edges exist in the scene graph. As there are hallucinated nodes and edges, this is not a “fully valid graph.” **3. Transparency and Bias Mitigation:** We recognize the importance of transparency and minimizing bias in our evaluation process. To address potential concerns, we will open-source the model outputs used for these manual checks, allowing for public scrutiny and verification. > **Reviewer (W8):** The texts in Figure 1 and Figure 2 are too small to read. **Authors:** We will enlarge the text in Figures 1 and 2 to enhance readability in our next manuscript. 
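The three faithfulness metrics and the worked Deer/Bushes example above can be expressed as a small function. This is an illustrative sketch, not the authors' evaluation code, and the scene-graph contents are placeholders chosen to reproduce the fractions quoted in the rebuttal.

```python
def faithfulness(cited_nodes, cited_edges, graph_nodes, graph_edges):
    """Fraction of cited nodes/edges that exist in the graph, plus whether
    the entire citation set is valid ("fully valid graph")."""
    valid_nodes = sum(n in graph_nodes for n in cited_nodes) / len(cited_nodes)
    valid_edges = sum(e in graph_edges for e in cited_edges) / len(cited_edges)
    fully_valid = valid_nodes == 1.0 and valid_edges == 1.0
    return valid_nodes, valid_edges, fully_valid

# Worked example from the rebuttal: "Bushes" exists in the scene graph,
# "Deer" is hallucinated, and none of the three cited edges exist.
graph_nodes = {"Bushes", "Grass", "Tree"}   # illustrative scene graph
graph_edges = {("Bushes", "Tree")}
cited_nodes = ["Deer", "Bushes"]
cited_edges = [("Deer", "Bushes"), ("Deer", "Grass"), ("Bushes", "Grass")]
print(faithfulness(cited_nodes, cited_edges, graph_nodes, graph_edges))
# (0.5, 0.0, False)
```

Averaging these per-response values over the 100 manually checked responses gives the Valid Nodes, Valid Edges, and Fully Valid Graphs percentages reported in Table 5.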
--- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their responses, which address many of my concerns. I am satisfied with the clarifications regarding the PCST algorithm, the analyses of retrieved subgraphs and the efficiency. However, as noted by the authors, the paper could be further improved by providing more details in the following areas: (1) "chat with their graph" concept: further explanation and examples are needed; (2) Explainability of G-Retriever: empirical analyses to support this claim should be included; (3) Experimental Details: necessary experimental details should be provided in the main text to facilitate the understanding of the results. Therefore, I have updated my score accordingly. --- Reply to Comment 1.1.1: Title: Response to Reviewer GQPK Comment: Thank you very much for your time reviewing our answer, and for updating your score. We appreciate your suggestions and will ensure to address the areas you've highlighted, including further details on the "chat with their graph" concept, explainability analyses, and necessary experimental details in the revised manuscript.
Summary: This paper proposes a Graph Question Answering (GraphQA) benchmark with data collected from different tasks including ExplaGraphs, SceneGraphs, and WebQSP. It then proposes the G-Retriever method, introducing the first retrieval-augmented generation (RAG) approach for general textual graphs, which can be fine-tuned to enhance graph understanding via soft prompting. G-Retriever performs RAG over a graph by formulating this task as a Prize-Collecting Steiner Tree optimization problem. The generation model is fine-tuned using a graph token and the textualized graph as inputs. Empirical evaluations show G-Retriever outperforms baselines on textual graph tasks from multiple domains and scales well with larger graph sizes. Strengths: - The proposed GraphQA benchmark is comprehensive and timely. - Converting the problem of finding a connected subgraph that maximizes the total prize values of its nodes while minimizing the total costs of its edges to PCST is smart. - Strong experimental results by using the proposed G-Retriever on all three datasets. Weaknesses: I don't see major weaknesses of this paper. Several minor points: - I think several baselines are missing: (1) Simply feeding the retrieved top-k nodes/edges (plus their neighbors) to the LLM would be able to show the effectiveness of PCST. (2) It would be helpful to understand the generation model by replacing the tuned generation model with a fixed LLM like GPT4, Claude, etc. - When encoding the nodes and edges, the context of the nodes/edges is missing; it could be beneficial to include the context (e.g., adjacent nodes). - Table captions should be more informative so that they are easier to understand (e.g., in Table 5, what metrics are used?) Technical Quality: 4 Clarity: 3 Questions for Authors: n/a Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
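The PCST formulation this review praises scores a connected subgraph by total node prizes minus total edge costs. The sketch below only evaluates that objective for a candidate subgraph (checking connectivity by graph search); it is not a PCST solver and not the paper's implementation, and all graph data is made up.

```python
from collections import defaultdict

def pcst_objective(nodes, edges, prize, cost):
    """Score a candidate subgraph: sum of node prizes minus sum of edge costs.
    Returns None if the candidate node set is not connected via its edges."""
    if not nodes:
        return None
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Depth-first search to verify the candidate is one connected component.
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(adj[u] & nodes)
    if seen != set(nodes):
        return None  # disconnected: not a valid PCST candidate
    return sum(prize[n] for n in nodes) - sum(cost[e] for e in edges)

prize = {"a": 5.0, "b": 3.0, "c": 0.1}            # query relevance of nodes
cost = {("a", "b"): 1.0, ("b", "c"): 2.0}          # cost of including edges
print(pcst_objective({"a", "b"}, [("a", "b")], prize, cost))  # 7.0
```

PCST then searches for the connected candidate maximizing this score, which is how G-Retriever trades subgraph relevance against subgraph size.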
Rebuttal 1: Rebuttal: We would like to thank the reviewer very much for the careful reading and comments regarding our work. Please see below our answer to the raised comments/questions. > **Reviewer:** I think several baselines are missing: (1) Simply feed the retrieved top-k nodes/edges (plus its neighbors) to the LLM would be able to show the effectiveness of PCST. **Authors:** To demonstrate the effectiveness of PCST, we compared it to a similar baseline, KAPING [1]. KAPING retrieves the top-k triples related to the question from the graph, adds them to the input question as a prompt, and then sends this to LLMs to generate the answer. In addition to KAPING, we included the following baseline methods to address the reviewer's concern: - Top-k nodes plus their neighbors: For each query, the top-k nodes and their one-hop neighbors are retrieved. - Shortest path retrieval: This approach involves retrieving the top-k nodes and the shortest paths between them. For all methods, we set k = 5 and used llama2-7b-chat as the LLM. The results are presented in the table below, where we observed that our PCST-based retrieval outperforms the baseline retrieval methods, achieving an accuracy of 66.17 on the WebQSP dataset. | Method | Hit@1 | |--------------------------------------|--------------------| | PCST retrieval | 66.17 | | top-k triples retrieval (KAPING) | 52.64 | | top-k nodes plus their neighbors | 49.82 | | shortest path retrieval | 55.20 | > **Reviewer:** (2) It would be helpful to understand the generation model by replacing the tuned generation model to a fixed LLM like GPT4, Claude, etc. **Authors:** To address this concern, we conducted additional experiments by replacing the tuned generation model with fixed LLMs on the WebQSP dataset. Specifically, we first applied the PCST retrieval to obtain the subgraph, then converted it into text and fed it into the fixed LLMs along with the query. 
We considered two fixed LLMs: the open-source llama2-7b-chat and the closed-source GPT-4. The results are summarized in the table below: | Method | Hit@1 | |---------------------|--------------------| | llama2-7b-chat | 66.17 | | GPT-4o | 67.87 | | G-Retriever | 70.49 | As the results indicate, G-Retriever outperforms both fixed LLM baselines. This demonstrates the effectiveness of tuning the generation model in our approach. > **Reviewer:** When encoding the nodes and edges, the context of the nodes/edges is missing, it could be beneficial to include the context (e.g., adjacent nodes) **Authors:** Thank you for your observation. In our indexing step, we use a pre-trained language model (LM) to encode the text attributes associated with each node and edge into embeddings. Although we do not explicitly include the context, such as adjacent nodes, during this initial encoding, we address this in the generation step. Specifically, our framework incorporates a graph neural network (GNN) component designed to aggregate information from adjacent nodes and update the node and edge representations based on the graph context. This approach allows us to effectively capture and use the context during the generation phase. Therefore, while the context is not directly included in the initial encoding, it is integrated later in the process through the GNN, ensuring that the contextual information is considered in the overall framework. > **Reviewer:** Table captions should be more informative so that they are easier to understand (e.g., In Table5, what metrics are used?) **Authors:** Thank you for the suggestion. In Table 5, we assessed the model’s faithfulness using three metrics: the fraction of valid nodes (denoted as Valid Nodes), the fraction of valid edges (denoted as Valid Edges), and the fraction of times the entire set of nodes and edges cited was valid (denoted as Fully Valid Graphs). 
As recommended, we will update the table captions in our revised manuscript to include this information. **Reference** [1] Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering, 2023. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thank you for your response. I think adding those comparison does make the argument more solid and I will maintain my rating. --- Reply to Comment 1.1.1: Title: Response to Reviewer mCtm Comment: Thank you very much for your time reviewing our answer. We will ensure to include these comparisons in our revised manuscript.
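The rebuttal's point that graph context enters through the GNN rather than the initial text encoding can be illustrated with one round of neighbor averaging. This is a toy stand-in for the GAT component (which uses learned attention, not plain means); the embeddings and names here are invented for illustration.

```python
import numpy as np

def mean_aggregate(node_embs: np.ndarray, edges) -> np.ndarray:
    """One round of neighbor mean-aggregation: each node's embedding is
    averaged with the mean of its neighbors' embeddings."""
    n = node_embs.shape[0]
    neighbors = [[] for _ in range(n)]
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    agg = node_embs.copy()
    for i, nbrs in enumerate(neighbors):
        if nbrs:  # isolated nodes keep their context-free LM embedding
            agg[i] = (node_embs[i] + node_embs[nbrs].mean(axis=0)) / 2
    return agg

embs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # stand-in LM embeddings
out = mean_aggregate(embs, [(0, 1)])
print(out[2])  # node 2 has no neighbors, so it is unchanged: [1. 1.]
```

After aggregation, connected nodes 0 and 1 share information (both become the midpoint of their original embeddings), while node 2 is untouched, mirroring how the GNN injects graph context only where edges exist.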
Summary: The work builds a new GraphQA benchmark for real-world graph question answering and presents G-Retriever, an architecture adept at complex and creative queries. Given a query and a graph, G-Retriever retrieves a connected subgraph from the original graph according to the query. The subgraph, a textualized graph, and the query are fed into an architecture consisting of a graph encoder aligned with an LLM. Experiments show that G-Retriever achieves SoTA in textual graph tasks across multiple domains, significantly improves efficiency, and demonstrates resistance to hallucination. Strengths: 1. The idea of incorporating retrieval for graph question answering is novel and interesting and could be inspiring for future work. 2. The proposed G-Retriever framework performs well on the GraphQA dataset. The retrieval method scales effectively with larger graph sizes, which is not addressed in many previous works. Locating the related subgraph instead of using the entire graph also reduces hallucination. 3. The experiments are comprehensive, including analysis from various aspects such as hallucination and graph encoder selection. Weaknesses: 1. The framework seems to mainly work for larger graphs. For tasks with smaller graphs, such as the ExplaGraphs dataset with an average node number of 5.17, it is unnecessary to use such a framework. The performance of G-Retriever and GraphToken is nearly the same on the ExplaGraphs dataset. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. After the subgraph is retrieved, is it possible to use neuro-symbolic methods (e.g. answer set programming or graph search) instead of using a graph encoder together with an LLM? It would be more efficient and cost much less. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer very much for the careful reading and comments regarding our work. Please see below our answer to the raised comments/questions. > **Reviewer:** The framework seems to mainly work for larger graphs. For tasks with smaller graphs, such as the ExplaGraphs dataset with an average node number of 5.17, it is unnecessary to use such a framework. The performances of G-retrieval and GraphToken are nearly the same for the ExplaGraphs dataset. **Authors:** We acknowledge that G-Retriever and GraphToken demonstrate similar performance on the ExplaGraphs dataset, which consists of small graphs. As noted by the reviewer, other techniques like GraphToken can be employed effectively on this dataset. However, our aim in applying G-Retriever to the ExplaGraphs dataset was to highlight the flexibility and effectiveness of the proposed technique across a wide range of graph sizes. This versatility is particularly advantageous in datasets like SceneGraphs, where graph sizes are small on average (19.1 nodes) but vary significantly (ranging from 1 to 126 nodes with a standard deviation of 8.3). For these small-scale graphs, G-Retriever significantly outperforms GraphToken, achieving 0.8131 accuracy compared to 0.4903, as reported in Table 5. Additionally, G-Retriever offers extra benefits, such as a flexible conversational interface. As shown in Figure 1, G-Retriever can generate an essay based on a given explanation graph. We believe that this versatile question-answering capability adds substantial value beyond just accuracy. | | # node: min, max, mean ± std | # edge: min, max, mean ± std| | -------- | ------- | ------- | | ExplaGraphs | 4, 9, 5.17 ± 1.18 | 3, 8, 4.25 ± 1.23 | | SceneGraphs | 1, 126, 19.13 ± 8.28 | 0, 1657, 68.44 ± 65.77 | | WebQSP | 3, 2000, 1371.18 ± 566.91 | 2, 10818, 4253.27 ± 2238.12 | > **Reviewer:** After the subgraph is retrieved, is it possible to use neuro-symbolic methods (e.g. 
answer set programming or graph search) instead of using a graph encoder together with an LLM? It would be more efficient and cost much less. **Authors:** Thank you for the insightful suggestion. While neuro-symbolic methods such as answer set programming or graph search can indeed offer efficiency and cost-effectiveness, they generally require a predefined specification of patterns, i.e. symbols, to guide the search process. In our graph QA scenario, the patterns are highly dependent on the query, which can vary significantly in complexity, making it challenging to effectively predefine these patterns. However, we acknowledge that for specific applications where patterns can be clearly defined, neuro-symbolic approaches could be a viable alternative and are certainly worth exploring in future work. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: Thanks for the response. My concerns are well addressed. Overall, I think this work makes a good contribution to the area. I will improve my score to 7. --- Reply to Comment 1.1.1: Title: Response to Reviewer GHV4 Comment: Thank you very much for your time reviewing our answer, and for updating your score.
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their time and effort in evaluating our paper. In this global response, we aim to clarify the contributions of the GraphQA benchmark (G1), elaborate on our PCST-based retrieval method (G2), and present new experiments conducted in response to the reviewers' feedback (G3). **G1. Contribution of the GraphQA Benchmark:** We acknowledge that the GraphQA benchmark involves converting three existing graph datasets into a uniform format. However, we believe this standardization provides significant value to the research community in several ways: - **Task Introduction:** Unlike existing graph question-answering benchmarks that focus on small or synthetic graphs, our benchmark includes real-world applications and frames them as graph question-answering tasks. - **Standardization:** A key and significant effort of this benchmark is the standardization and processing of diverse datasets into a uniform format suitable for GraphQA tasks. These datasets, previously used in different contexts, are redesigned to focus specifically on GraphQA, ensuring consistent and comparable evaluations across models. - **Accessibility:** We have open-sourced the GraphQA benchmark, providing a unified format that simplifies model application across multiple datasets. This reduces the complexity of handling various data structures and preprocessing pipelines, lowering barriers for new researchers and encouraging broader participation. We have already seen several novel works using our GraphQA benchmark, and we expect rapid adoption within the LLM and GNN communities. - **Baseline Comparisons:** The benchmark offers baseline performance metrics, helping researchers identify the strengths and weaknesses of new approaches compared to established baselines. We will ensure these contributions are more clearly highlighted in the revised manuscript. **G2. 
Elaboration on PCST-Based Retrieval:** **1) Modeling motivation.** We formulate subgraph retrieval as a Prize-Collecting Steiner Tree (PCST) optimization problem. This is motivated by the need to find a connected subgraph containing the most relevant nodes and edges, a goal that aligns well with the objectives of PCST: maximizing node values while minimizing edge costs. Though not universally acknowledged as optimal, we have empirically validated its effectiveness. **2) Effectiveness Compared to Other Retrieval Baselines.** To further demonstrate the effectiveness of PCST-based retrieval, we compared it to a similar baseline, KAPING [1]. KAPING retrieves the top-k triples related to the question from the graph, adds them to the input question as a prompt, and then sends this to LLMs to generate the answer. In addition to KAPING, we included the following baseline methods to address the reviewers' concern: - Top-k nodes plus their neighbors: For each query, the top-k nodes and their one-hop neighbors are retrieved. - Shortest path retrieval: This approach involves retrieving the top-k nodes and the shortest paths between them. For all methods, we set k = 5 and used llama2-7b-chat as the LLM. The results are presented in the table below, where we observed that our PCST-based retrieval outperforms the baseline retrieval methods, achieving an accuracy of 66.17 on the WebQSP dataset. | Method | Hit@1 | |--------------------------------------|--------------------| | PCST retrieval | 66.17 | | top-k triples retrieval (KAPING) | 52.64 | | top-k nodes plus their neighbors | 49.82 | | shortest path retrieval | 55.20 | **3) Advantages of Subgraph-Based Retrieval.** - **Context-Relevant.** Selecting nodes and edges in isolation may overlook neighborhood information. In contrast, PCST-based retrieval is guaranteed to return a connected subgraph, capturing the graph context during the retrieval process. 
This approach retrieves not only high-relevance nodes or edges but also “bridge” elements that connect these with contextually significant nodes or edges, which are crucial for generating a comprehensive response. - **Size Management.** Compared to the shortest path method, PCST retrieval provides greater control over the size of the retrieved subgraph. By adjusting the prizes and costs on nodes and edges, users can fine-tune the subgraph's extent. In contrast, the shortest path approach lacks the ability to control the distance between the top-k nodes, which can lead to disconnected subgraphs or the inclusion of unnecessarily long paths. **G3. New experiments:** In response to reviewers' feedback, we conducted several new experiments to address their concerns: - Comparison with more retrieval baselines, demonstrating the effectiveness of our PCST-based retrieval method (Table 16). - Replacing the tuned generation model in G-Retriever with a fixed LLM (Table 17), highlighting the necessity of tuning the generation model, particularly through graph soft prompting in G-Retriever. - Efficiency evaluation during the inference stage (Table 18), showing the significant efficiency gains brought by the retrieval process. - Quantifying the quality of our retrieval method (Table 19), showing that our approach retrieves more accurate information than the KAPING baseline. - Ablation study to demonstrate the importance of node and edge attributes (Table 20). Please find the new experiments in the new single-page rebuttal PDF file. We hope that our point-by-point responses with the new experiments have clarified the reviewers’ concerns. We are happy to answer any additional questions to clarify further. Thank you again for your dedicated effort and time to review the paper. Best regards, The authors **References** [1] Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering, 2023. Pdf: /pdf/a20ca954410e31d1d1dd0d67d73f830190f2edda.pdf
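The prize/cost trade-off behind the PCST formulation discussed above can be illustrated with a small greedy heuristic. This is only an illustrative sketch, not the exact PCST solver used by G-Retriever; in practice, node prizes would come from query-to-node relevance scores, and the function name below is hypothetical.

```python
# Illustrative greedy sketch of the prize-collecting idea behind
# PCST-based retrieval -- NOT the exact solver used in the paper.
# Nodes carry prizes (query relevance); edges carry costs. We grow a
# connected subtree from the highest-prize node and add an adjacent
# edge only while the new node's prize outweighs the edge cost.

def greedy_prize_collecting_tree(prizes, edges):
    """prizes: {node: prize}; edges: {(u, v): cost}, undirected."""
    adj = {}
    for (u, v), c in edges.items():
        adj.setdefault(u, []).append((v, c))
        adj.setdefault(v, []).append((u, c))
    root = max(prizes, key=prizes.get)         # seed at the most relevant node
    nodes, tree = {root}, []
    while True:
        best, best_gain = None, 0.0
        for u in nodes:
            for v, c in adj.get(u, []):
                gain = prizes.get(v, 0.0) - c  # prize minus edge cost
                if v not in nodes and gain > best_gain:
                    best, best_gain = (u, v), gain
        if best is None:                       # no profitable expansion left
            break
        nodes.add(best[1])
        tree.append(best)
    return nodes, tree
```

Because every expansion attaches a new node to the current tree, the result is connected by construction, and raising edge costs shrinks the retrieved subgraph, which mirrors the "size management" property described above.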
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Probing Social Bias in Labor Market Text Generation by ChatGPT: A Masked Language Model Approach
Accept (poster)
Summary: This paper presents a method to assess the extent to which social biases are encoded in LLMs. The motivating example throughout the paper is assessing the degree of gender bias in the context of job applications in the labor market. The method (PRISM) is designed as follows: given text (presumably output by the LLM), each word is masked out, one at a time, and the masked word is predicted using a masked language model. However, the only masked words predicted belong to one of two lists, corresponding to the two social groups (e.g. a list of masculine terms and a list of feminine terms). If the probabilities of words in one list are consistently larger than those of the other list, there is evidence for social bias. The paper then uses PRISM to assess the social biases of ChatGPT. ChatGPT is prompted to produce job applications, and the method analyzes bias across four social dimensions relating to gender. The paper finds that biases in job postings are likely reproduced and even exacerbated in job applications generated by ChatGPT. Strengths: Strengths: - Important problem: the paper addresses the important problem of detecting and understanding social biases. The increasing adoption of LLMs by both employers and job seekers makes this a relevant issue. The potential of these models to amplify biases underscores the importance of studying this problem. - Intuitive method: PRISM is an intuitive method that's easy to understand and explain. Moreover, it's flexible and efficient. All that's needed is word lists and an MLM to use for evaluation. - Writing: The writing is clear throughout the paper, which helps make the method easy to understand. Weaknesses: The biggest weakness of the paper is the lack of evaluation. A new method is proposed for assessing social bias in LLMs, but only a small portion of the paper is dedicated to validating the new algorithm. 
While the validation exercises are a good start, they're limited; for example, the human expert validation is limited to just one dataset, with 6 human labelers and <100 examples. To see evidence that this is a general method, we need to see robustness across multiple application areas and experiments. Related to the above, I can imagine a problem with this approach where biases of the masked language model are instead incorrectly attributed to the LLM generating the text. For example, if the masked language model used is biased (e.g. is likelier to predict male words than female words), then any sequence of text PRISM is evaluated on will be viewed as biased. It seems that for this test to be reliable, the masked language model should form calibrated and "unbiased" predictions of its own. Couldn't this be revealing biases in BERT rather than the LLM? I'm not sure how likely this is, but it warrants discussion and validation in the paper; otherwise the limited evaluation is not compelling. An additional weakness is that the experiments are limited. The experiments don't include baselines for assessing social bias. Is there evidence that PRISM is picking up on biases that other methods miss? Additionally, only one LLM is considered in the job posting experiment, and only one MLM (BERT) is used for evaluation -- ablating the latter could help demonstrate the robustness of the method. A little less urgent, it would've been helpful to see more qualitative examples of the postings, along with more details for the experiments (e.g. the full word lists or at least the lengths of each word list). Technical Quality: 2 Clarity: 4 Questions for Authors: See section above, namely: - Is there evidence that the test isn't susceptible to social biases in masked language models? - What does PRISM find that isn't found by other methods? - Is there more evaluation/validation that the method is useful? 
Confidence: 4 Soundness: 2 Presentation: 4 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
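The masking-and-ranking procedure this review summarizes can be sketched in a few lines. Everything below is a hypothetical stand-in: `mlm_probs` abstracts the masked-LM prediction step (bert-base-uncased in the paper), and the two word lists are illustrative toy lists, not the validated inventories the authors use.

```python
# Hedged sketch of a PRISM-style rank-based bias score. `mlm_probs` is
# a hypothetical callable returning {word: probability} for the token
# at position i when it is masked out; in the paper this role is played
# by a masked language model such as bert-base-uncased.

MASCULINE = ["confident", "decisive"]    # illustrative lists only
FEMININE = ["caring", "supportive"]

def prism_score(tokens, mlm_probs):
    per_token = []
    for i in range(len(tokens)):
        probs = mlm_probs(tokens, i)
        # rank the list words jointly: rank 1 = highest probability
        ranked = sorted(MASCULINE + FEMININE, key=lambda w: -probs.get(w, 0.0))
        rank = {w: r + 1 for r, w in enumerate(ranked)}
        mas = sum(rank[w] for w in MASCULINE) / len(MASCULINE)
        fem = sum(rank[w] for w in FEMININE) / len(FEMININE)
        # lower rank = higher probability, so a positive (fem - mas)
        # means masculine words are, on average, more probable here
        per_token.append(fem - mas)
    return sum(per_token) / len(per_token)
```

Under this sign convention a positive score indicates a masculine orientation. The sketch uses rank differences rather than raw probability differences, the design choice the authors argue helps mitigate the evaluating MLM's own calibration biases.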
Rebuttal 1: Rebuttal: Thanks for the detailed and thorough comments. Below please find our responses to specific comments: --- W1: We acknowledge and face challenges in evaluating social bias in text: 1. The evaluation benchmarks in this field are scarce and underdeveloped, so we can only turn to human labeling. This is also the key motivation for developing our method, which aims to provide a reference bias score for a field lacking domain-specific labeled data. 2. Compared to related papers in this field, our evaluation is more robust. Many existing evaluations only rely on heuristics, such as counting gendered words like “he/she” [3,8]. 3. Evaluating gender bias is particularly challenging due to its subtle and multifaceted nature. This complexity makes it difficult to use crowdsourcing effectively, leading us to rely on expert labeling. Our paper significantly advances existing research by providing expert validation across multiple dimensions of gendered language in the labor market context. While not perfect, our approach represents the first of its kind to provide such comprehensive expert validation, addressing a critical gap in the literature. Specifically, we have used the following strategies for validation: 1. For human evaluation, we work with experts in related fields (sociology, psychology, management, human resources), with an iterative multi-round coding process. This ensures high labeling quality, especially compared to non-expert crowdsourcing workers (who may be less attuned to the intricacies of labor market dynamics). We have provided more details about the expert labeling process in the global response. 2. In addition to human evaluation, we also conducted benchmark validation. The task has more than **250,000** examples, and the dataset is from a different domain (biography classification). 3. Bias is inherently contextual; what is biased in one context may not be in another. 
To ensure accuracy and relevance, we focused specifically on the labor market, going beyond generic language use that may not apply well to labor market processes. Finally, the main goal of this paper is to evaluate the bias within LLMs. Please refer to our response to your next question for additional details. --- W2: This is a really good and difficult question. First, we want to emphasize that our final goal is to assess LLM bias, i.e., we are investigating the **change in the score** between input text and output text. Using **the same MLM** for both input and output text allows us to isolate and measure the additional bias introduced by the LLM during the text generation process, thereby controlling for the baseline biases present in the MLM used for evaluation. Additionally, we conducted **control experiments with different MLMs**: we selected two more MLMs, BERT-large and DistilBERT. The results for both the human validation score and the LLM correlation plots are provided in the additional PDF. These results demonstrate that changing the MLMs produces consistent outcomes in the scatter density plot for evaluating LLM bias, and the statistical testing results are also highly consistent. Also, as noted in lines 507-510, the use of the ranking method helps mitigate bias within MLMs. Indeed, the bias of each MLM affects the bias score calculation, but we would like to highlight that related research, such as using word embeddings for bias scores [3], also faces similar questions regarding the bias inherent in the embeddings themselves. This is a big challenge in the field, but we do our best to conduct control experiments and validation methods to mitigate this issue. --- W3: We include baselines in the benchmark validation (see Lines 210-218 and 482-490). One baseline uses pure counting while the other uses static word embeddings (word2vec). 
Our method is the first of its kind to use contextual information with an MLM, i.e., the information of the whole sentence, rather than looking at each word individually. From Figure 4b in the paper, we can observe better performance compared with the other baselines. --- W4: The full word lists are integrated from the existing sociology literature, specifically references [5,6,7], and we adopted them directly without modification. These word lists are already publicly available, and we will publish the word lists and the corresponding code. Below is the table for the length of each word list: | | Fem | Mas | |------|-----|-----| | psy | 73 | 120 | | role | 7 | 53 | | wfc | 28 | 25 | | gsc | 6 | 6 | As well as some job posting examples: | Gender | Example lexicon | Example job advertisement excerpts | |------------|---------------------------------------------------------|------------------------------------------------------------------------------------| | Masculine | confident, effective, innovative, (pro)active, practical, pragmatic, problem-solving | - **Confident** to work with **high-caliber** people. - Proven ability to be **effective** in a fast-paced, ambiguous, and changing environment. - Encourage new ideas and **innovative** approaches and **actively** share knowledge and experience to enhance the development of the team. | | Feminine | attentive, accurate, timely, caring, polite, diplomatic | - You have strong **attention to detail**. - Responsible for the **timely** and **accurate** maintenance of accounting systems at [town name]. - High level of initiative, maturity, **tact** and **diplomacy**. | --- Rebuttal Comment 1.1: Comment: I thank the authors for responding to my comments. I will maintain my original score --- Reply to Comment 1.1.1: Comment: Dear reviewer HqWb, thank you for your reply. We wonder if there is any further information/clarification we can provide to help clarify your remaining queries and bolster your confidence in our work. 
Thank you.
Summary: This research investigates the social biases present in ChatGPT-generated job applications using a novel experimental design that responds to real job advertisements. By simulating the job application creation process, the study examines the language patterns and biases that surface when the model is prompted with diverse job postings. Specifically, this paper presents a PRISM algorithm based on Masked Language Models to quantitatively assess social bias based on validated inventories of social cues/words. The results show that the increasing adoption of generative AI can reinforce and exacerbate gender and social inequalities in the labor market through the use of biased and gendered language. Strengths: 1. This paper provides a comprehensive analysis of the bias problem in LLMs, clearly presenting the motivations behind the model design. 2. A key contribution of this research is the development of a novel bias evaluation framework based on Masked Language Models, which quantitatively assesses social bias using validated inventories of social cues and words. This framework advances existing methods in terms of efficiency and flexibility, requiring minimal human labeling effort. 3. The model evaluation is convincing, featuring a well-designed experiment to probe the black box of social biases in ChatGPT models. Weaknesses: 1. The bias evaluation in this paper relies heavily on the predefined bias-related word lists, e.g., the Feminine word list and Masculine word list, so how will the quality of the word lists affect the bias evaluation results? It would be better to analyze this aspect. 2. The proposed method appears to be generally applicable to any domain, although the major focus is on labor market text generation. Therefore, the characteristics of bias in labor market text generation need to be highlighted. 3. There are some writing problems in this paper; for example, the table captions should generally be presented above the table. 
Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How will the quality of the word lists affect the bias evaluation results? 2. The co-occurrence of some words arises naturally; for example, in common understanding, "soft" is more related to women than to men. Is it reasonable to treat this as bias? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The major limitation is the validation of the basic motivation of assessing social bias based on validated inventories of social cues/words. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to the reviewer for the time and effort in providing the comments and suggestions. Below, we address each of your points in detail. --- W1: Thank you for your question, which we address in three aspects: 1. **Use Case**: In our settings, we used pre-validated word lists that are widely used in existing research. Our method aims to provide a bias score based on words chosen by these lists that are grounded in expert knowledge of labor market processes across the disciplines of sociology, management/human resources, psychology, and linguistics. Therefore, we are confident in the validity and reliability of the widely used and social-science-expert-validated word lists. 2. **Algorithmic Aspect**: The design of our algorithm incorporates robustness measures. Specifically, after combining the two sets of probabilities, we only take the top alpha percentage for calculation. 3. **Numerical Experiment**: We have conducted additional experiments to assess the impact of word-list quality. As shown in Figure 4(b) in the new PDF, the correlation with human labeling decreases as we remove more words from the word list. This additional analysis helps underscore the importance of a comprehensive and accurate word list for reliable bias evaluation results. --- W2: This is an important point. We want to first explain that biases are contextual, i.e., some words are biased in one domain but not in another. So, in order to be correct and precise, the word lists we use are grounded in multidisciplinary literature on labor market processes, which goes beyond generic language use that may not necessarily apply well to the labor market context. Our method is easily generalized to other domains by changing the word lists (to other domain-specific ones). 
Moreover, drawing on interdisciplinary collaboration, we believe our work is a good illustration of combining ML and social science, and we believe our paper fits well into the NeurIPS stream of ‘Machine learning for social sciences’. --- W3: Thank you for pointing this out. We have fixed the issue you raised and moved the table titles so that they appear above the tables. --- Q1: same as W1 --- Q2: In terms of the specific example you noted, established psychological methods and evidence show that language associated with characteristics such as “soft” is perceived by readers as “feminine”, which tends to appeal to and resonate more with women than men. In the case of job postings and applications, this would mean that (1) the use of feminine language (e.g., soft) in job postings tends to appeal more to women (than men) applicants; (2) the use of feminine language in job applications tends to be evaluated more favorably for a post stereotypically perceived to have a feminine orientation. Therefore, our definition of bias is embedded in the broader social science and psychological literature on the “biased” impact of language use. We have clarified our approach to bias in our revised paper. This question also pertains to the validity of the word lists we use, which we clarify in three aspects: 1. The word lists are established, widely used, and validated in the literature, specifically those developed by [5,6,7]. They are theoretically grounded in social science research. 2. The word lists are specially informed by extensive sociological, psychological, and management research focusing on gender in the labor market. 3. Lastly, we conduct a rigorous internal validation process, achieving a high level of intercoder reliability with experts specializing in sociology, management, and labor market research. --- Limitation: We appreciate that some may view this as a limitation, but we view this as a strength. 
A great deal of social science research in psychology, linguistics, and sociology has established the relationship between language and social bias. We are able to draw on the strengths of decades of research to inform innovations in ML/NLP. Therefore, we believe our paper fits well into the NeurIPS stream of ‘Machine learning for social sciences’ by illustrating the strengths of interdisciplinary cross-fertilization.
Summary: This paper investigates the impact of social biases in the application of LLMs within the labor market. The authors examine biases in job applications generated by ChatGPT based on given job posts. They introduce a new bias evaluation method called PRISM (Probability Ranking bIas Score via Masked language model), which combines predefined gender-related word lists from the social sciences with a masked language model approach to assess biases in input texts. Experiments with ChatGPT revealed that biases present in job postings are not only replicated but also amplified in job applications created by generative AI in response to these postings. Strengths: 1. This paper is well written. The authors have clearly defined their scope, and the paper is very comfortable to read. 2. This topic is novel and interesting and could serve as a good study for ML in social sciences. Weaknesses: 1. Experiments are limited. Only ChatGPT has been applied. 2. This paper aims to investigate social biases, but no detailed exploration of different bias types has been conducted. From the algorithm, I only see gender bias involved, which I think is only a part of social biases. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Based on the scoring function in the line between 155 and 156, why do you state "A positive score indicates a bias toward a masculine orientation, while a negative score suggests a bias toward a feminine orientation."? Shouldn't it be the reversed reading? 2. As the word lists for bias evaluation are predefined, how do you confirm their reliability? 3. I am interested in the human evaluation part of sec 3.4. Could you include more details about the human experts and the tasks they have done, maybe in the appendix, i.e. the demographics, the inter-annotator agreements, and the amount of tasks they did? 
Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Same as the points in Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are very grateful for the thoughtful comments and heartwarming acknowledgment, as well as the time and effort devoted to the reviewing process. Below please see our responses to the specific points in the reviewing comment. --- W1: Our experiments were specifically conducted using ChatGPT due to its wide accessibility and prevalent usage. This choice was deliberate to ensure that our findings are representative of the typical user experience with one of the most commonly used models. However, we acknowledge the importance of testing additional models, and we agree that future work should include experiments with a broader range of models to validate and extend our findings. --- W2: This is a great suggestion, pointing out an important direction for future research, which we have now acknowledged in our revised paper. We focused on gender, as gender inequality in the labor market is well-documented, persistent, and associated with gendered language, making this a prominent and exemplary type of social bias. We were also motivated in part to include just one aspect of bias due to the limited space of the paper. Adding additional dimensions would require providing background on each form of inequality, as well as locating established word lists of linguistic cues associated with those biases. In our revision, we have more clearly articulated that gender biases are chosen as a focal case of a broader range of social biases and that gender bias is important because it underpins one of the most prominent and persisting forms of inequality in the labor market. We have also suggested that future research explore additional forms of bias and labor market inequality. --- Q1: Thank you for your question. This is because we assign lower numerical ranks to higher probabilities. As illustrated in Figure 2 in the paper, a rank of 1 represents the highest probability. 
Therefore, a lower mean rank indicates higher probabilities, while higher ranks (larger numerical values) correspond to lower probabilities. We have clarified this in our revised paper to ensure the scoring logic is explicitly defined. --- Q2: The validity of our word lists is grounded in three aspects: 1. The word lists are established, widely used, and validated in the literature, specifically those developed by [5,6,7]. They are theoretically grounded in social science research. 2. The word lists are specially informed by extensive sociological, psychological, and management research focusing on gender in the labor market. 3. Lastly, we conduct a rigorous internal validation process, achieving a high level of intercoder reliability with experts specializing in sociology, management, and labor market research. --- Q3: Thank you for pointing this out. We will include this additional information in the appendix: To ensure the scientific rigor of the evaluation, we paid particular attention to inter-rater validity and reliability. Specifically, each phase included individual labeling of data conducted by four independent experts specializing in labor market inequalities associated with gender, work, and family, and Equity, Diversity, and Inclusion (EDI) in the labor market. In each round of labeling, individual labeling was followed by group sharing and discussion of the preliminary outcomes among the four experts. This combination of individual and group analysis allowed the team to trace the score labeling and the validation of the word inventory, ensure inter-rater reliability in each phase, and contextualize each phase within relevant scholarly literature, policies, and definitions. The score labeling and the validation were further validated by three additional expert labelers (management, human resource, and social science scholars) from the team. 
Through a double-blind labeling approach, the three additional experts independently assessed the dimensions and labels produced by the first four experts, demonstrating a high level of consistency. The scores and word lists were then finalized through further deliberation among the four experts and the three additional validators. Because we used an iterative multi-round coding process, the inter-coder consistency rate in the developmental coding varied between 0.6 and 0.8. Notably, the final validation by three fresh validators within the team achieved a high level of inter-coder consistency exceeding 0.8. --- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for the response. I think my initial questions were properly addressed. I will raise my score and would be glad if this paper could be included to the proceedings, which I think would definitely diversify the conference. --- Reply to Comment 1.1.1: Comment: Dear Reviewer Bg7r, we truly appreciate your valuable support and recognition of our paper!
Summary: This paper presents a novel bias detection algorithm, PRISM, and uses it to evaluate biases in job applications generated from prompts that include job postings. The technique uses a masked language model (MLM) to find the probability of “masculine” and “feminine” words replacing the masked word in the text. The bias score is then the difference between average ranks of feminine and masculine words, averaged over all words in the text. The authors find a correlation between biased job postings and biased generated text based on their evaluation method. Strengths: This paper’s primary strengths are that the work is well presented and the text is clear. The problem being tackled, gender bias in LLMs, is also one that is critically important and deserving of study. Weaknesses: I have two significant concerns about this work. First, I have a meta-concern about the setting in which the experiments are being conducted. Is generating a job application from a posting a common use case of LLMs? If there is indeed exacerbation of bias going from posting to application, what are the concrete harms of that manifesting itself? In short, I do not see much in the text motivating the work and helping us understand the real world impacts of these issues. Second, I have a specific concern about the methodology employed. The paper claims that the PRISM method obviates the need for human labeling, and this seems to be accomplished through the use of the MLM and the previously curated lists of words. I am not convinced based on what I have seen in the paper that this method is actually measuring the level of bias in the generated text - it may instead be measuring the gender bias in the underlying MLM (bert-base-uncased) that is used to compute the score. It seems there is a fundamental chicken and egg problem. 
Is a “feminine” word ranked higher because the surrounding text that was generated is biased, or is it ranked higher because the MLM is biased to predict more feminine words in that context when a neutral word would actually be perfectly appropriate? How would the authors design control experiments to disentangle these two effects? Technical Quality: 1 Clarity: 3 Questions for Authors: Meta question: isn’t the assumption that certain words are “feminine” or “masculine” already a sort of gender bias? I suspect that the paper means that these buckets are words that are stereotypically or historically associated with certain genders, but if that is the case it should be made explicit. Can the authors publish the exact word lists used for the experiment? Why are means of ranks employed as opposed to the means of the probabilities themselves? Similarly, is averaging the S scores to compute B(T) the right approach? What do the actual distributions of S look like? Is the mean capturing the distribution faithfully? What version of ChatGPT was used for this experiment? Because the models are updated fairly frequently, just specifying the use of the API is not sufficient for reproducibility. Confidence: 4 Soundness: 1 Presentation: 3 Contribution: 2 Limitations: The paper discusses limitations briefly in the appendix, and there is a very brief discussion about the potential for biases in the MLM to affect the results. However, I believe there needs to be significantly more validation that this approach is sound based on the concerns I enumerated above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
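For concreteness, the scoring the review describes — the difference of average MLM ranks of feminine versus masculine words at each masked position, averaged over all positions to get B(T) — can be sketched as below. The word lists, ranks, and function names here are illustrative stand-ins, not the authors' actual PRISM code:

```python
import numpy as np

def position_score(ranks, feminine_words, masculine_words):
    """Difference of mean ranks at one masked position.

    `ranks` maps candidate word -> rank in the MLM's prediction list
    (rank 1 = most probable). A positive score here means masculine
    words are, on average, ranked worse than feminine ones.
    """
    fem = np.mean([ranks[w] for w in feminine_words])
    masc = np.mean([ranks[w] for w in masculine_words])
    return masc - fem

def bias_score(per_position_ranks, feminine_words, masculine_words):
    """B(T): average of the per-position scores over all masked words."""
    return float(np.mean([
        position_score(r, feminine_words, masculine_words)
        for r in per_position_ranks
    ]))

# Toy example: two masked positions with made-up ranks.
fem, masc = ["nurturing", "supportive"], ["assertive", "dominant"]
ranks_pos1 = {"nurturing": 3, "supportive": 5, "assertive": 40, "dominant": 60}
ranks_pos2 = {"nurturing": 10, "supportive": 12, "assertive": 8, "dominant": 20}
print(bias_score([ranks_pos1, ranks_pos2], fem, masc))  # 24.5
```

In the real method the ranks would come from an MLM such as bert-base-uncased scoring each word list at every masked position.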
Rebuttal 1: Rebuttal: Thanks for the detailed and thorough comments. Below please find our responses to specific comments: --- W1: The use of LLMs like ChatGPT for generating job applications is indeed increasingly common among job seekers. This trend has been highlighted in discussions and concerns raised by employers and HR professionals, as reported by high-profile news outlets such as CNBC (May 6, 2024, article: "Exact same cover letters word for word: Career consultant says Gen Z are misusing AI") and Vox (Mar 8, 2023, article: "Maybe AI can finally kill the cover letter: Jobs still require cover letters. Apps like ChatGPT can help."). Moreover, the academic community is beginning to explore this use case, focusing both on its technical applications[1] and ethical/bias implications[2]. Our study contributes to this burgeoning body of literature by addressing key public, organizational, and scholarly concerns regarding the use of LLMs in generating job applications. We explicitly identify the two major harms resulting from bias exacerbation in LLM-generated job applications: 1. **Structural Harm**: LLM-generated job applications that perpetuate gender stereotypes contribute to the reinforcement of gender inequalities embedded in language used in labor market processes. Job postings and applications are critical steps in these processes and play a significant role in the reproduction of gender inequalities. By failing to challenge these biases, LLMs can inadvertently support the perpetuation of these inequities. 2. **Practical Harm**: As gender-biased LLM-generated job applications become part of the training data for future AI applications in HR (both in generating job postings and assessing applications), there is a risk of further entrenching gender biases in language use. This entrenchment can lead to cascading effects, such as increased labor force gender segregation, which have significant societal implications. --- W2: This is a really good question. 
We want to emphasize that our goal is to assess LLM bias, i.e., we are investigating the **change in the score** between input and output text. Using the **same MLM** for input and output text allows us to isolate and measure the additional bias introduced by the LLM during the text generation process, thereby controlling for the baseline biases present in the MLM used for evaluation. Additionally, we conducted **Control Experiments with Different MLMs**: we selected two more MLMs, BERT-large and DistilBERT. The results for both the human validation and the LLM correlation plots are in the additional PDF. These results show that changing the MLMs produces consistent outcomes in the scatter density plot for evaluating bias for the LLM, and statistical testing results are also highly consistent. Indeed, the bias of each MLM affects the bias score calculation, but we would like to highlight that related research, such as using word embeddings for bias scores [3], also faces similar questions regarding the bias inherent in the embeddings themselves. This is a big challenge in the field, but we do our best to conduct control experiments and validation methods to mitigate this issue. --- Q1: You are correct about our intended meaning, that words are stereotypically or historically associated with certain genders. We clarified that our word lists capture feminine and masculine orientations that are grounded in and associated with gender norms and stereotypes, as supported by extensive sociological, psychological, and linguistics research [4,5,6]. Notably, while some of this work is more historical in nature (evidence from decades ago), more recent research cited below also establishes the continuing importance of gendered language in contemporary settings. --- Q2: We would like to clarify that the word lists used in our experiment were not created by us specifically for this paper. 
Instead, we directly adopted them from existing literature across the social science research as referenced in [5,6,7], without modification. These word lists are publicly available, and we will publish the word lists along with the corresponding code. Using pre-validated word lists is a key advantage of our method, allowing us to easily incorporate established, rigorous, and theoretically grounded word inventories from social science research (see lines 132–136). Additionally, this paper also draws on the expertise of several co-authors specializing in sociology, management, and labor market research to provide further internal validation of the word lists. --- Q3: Ranks have several advantages over probabilities: **Theoretical Properties**: Ranks exhibit normality and allow for a rigorous formulation of the test statistic and its asymptotic result as presented in Theorem 1, which is based on the Wilcoxon rank sum test. Probabilities do not possess these important properties. **Sensitivity and Robustness**: Direct use of probabilities can be overly sensitive due to the highly skewed nature of prediction distributions. By using ranks, we can mitigate this sensitivity, and enhance the robustness of our results. This robustness enhancement also helps mitigate the inherent biases present in these pre-trained models. --- Q4: From a Bayesian view, when we lack prior information, the mean serves as a robust estimator of central tendency. However, we acknowledge that for future work, incorporating prior information for a weighted average or other aggregation methods could potentially improve performance. The asymptotic distribution of S is normal. To validate this, we have included a histogram of S in the PDF. Additionally, we conducted the Shapiro-Wilk test, which resulted in a p-value of 1.00, confirming that S is indeed normally distributed. --- Q5: We employed GPT-3.5 Turbo, as it was the most accessible and widely used free version at the time of our experiment. 
This choice was made to ensure that our results are representative of the typical user experience. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their thorough responses. I believe you have addressed my concerns about the weakness of the MLM usage. As such, I am adjusting my score to 7. --- Reply to Comment 1.1.1: Title: Thank you Comment: Dear reviewer e7QL, thank you for your kind support and recognition of our work.
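The rebuttal's Theorem 1 rests on the Wilcoxon rank sum test's large-sample normal approximation. A minimal sketch of that statistic (no tie correction, synthetic inputs — illustrative only, not the paper's implementation):

```python
import numpy as np

def rank_sum_z(x, y):
    """Wilcoxon rank-sum statistic under the large-sample normal
    approximation (no tie correction; illustrative only).

    Returns the z-score of the rank sum of `x` within the pooled sample.
    """
    pooled = np.concatenate([x, y])
    # Double argsort gives 0-based ranks of distinct values; +1 for 1-based.
    ranks = np.argsort(np.argsort(pooled)) + 1
    w = ranks[: len(x)].sum()              # rank sum of group x
    n, m = len(x), len(y)
    mean_w = n * (n + m + 1) / 2           # expected rank sum under H0
    var_w = n * m * (n + m + 1) / 12       # variance under H0
    return (w - mean_w) / np.sqrt(var_w)

# Toy example: all of x ranked below all of y.
z = rank_sum_z(np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0]))
print(round(z, 2))  # -1.96
```

With real data one would use a library routine that also handles ties (e.g., the rank-sum test in SciPy); the sketch only shows where the asymptotic normality comes from.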
Rebuttal 1: Rebuttal: Dear reviewers, Thank you so much for your time and effort in reviewing our paper. We want to highlight some key aspects of our paper: 1. Our method represents the first research on unsupervised bias evaluation in text using contextual information. 2. Our word lists are based on established word inventories, grounded in social science/labor market theories and tried and tested in a wide range of research (cited below). Our team, particularly its social science scholars, has provided further expert validation to doubly ensure the validity and robustness of the word lists. 3. Our work exemplifies the interdisciplinary synergy between machine learning and social science, aligning well with the NeurIPS stream of "Machine Learning for Social Sciences." We hope the reviewers will consider the significant social science impact and contributions of our paper. --- Our team's human validation detail: To ensure the scientific rigor of the evaluation, we paid particular attention to inter-rater validity and reliability. Specifically, each phase included individual labeling of data conducted by four independent experts specializing in labor market inequalities associated with gender, work, and family, and Equity, Diversity, and Inclusion (EDI) in the labor market. In each round of labeling, individual labeling was followed by group sharing and discussion of the preliminary outcomes among the four experts. This combination of individual and group analysis allowed the team to trace the score labeling and the validation of the word inventory, ensure inter-rater reliability in each phase, and contextualize each phase within relevant scholarly literature, policies, and definitions. The score labeling and the validation were further validated by three additional expert labelers (management, human resource, and social science scholars) from the team. 
Through a double-blind labeling approach, the three additional experts independently assessed the dimensions and labels produced by the first four experts, demonstrating a high level of consistency. The scores and word lists were then finalized through further deliberation among the four experts and the three additional validators. Because we used an iterative multi-round coding process, the inter-coder consistency rate in the developmental coding varied between 0.6 and 0.8. Notably, the final validation by three fresh validators within the team achieved a high level of inter-coder consistency exceeding 0.8. --- Reference: [1] Al Shalabi, H., Al-Hashemi, R., & Al-Ramadin, T. A. (2013). Automatic Cover Letter Generator System from CVs (ACLGS). Global Journal of Computer Science and Technology Software & Data Engineering, 13(3). [2] Bárány, A. (2023). From Humans to Machines: Can Artificial Intelligence Outsmart Human Job Applications?. [3] Dhamala, J., Sun, T., Kumar, V., Krishna, S., Pruksachatkun, Y., Chang, K. W., & Gupta, R. (2021, March). Bold: Dataset and metrics for measuring biases in open-ended language generation. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (pp. 862-872). [4] Coates, J. (2015). Women, men, and language: A sociolinguistic account of gender differences in language. Routledge. [5] Bem, S. L. (1974). The measurement of psychological androgyny. Journal of consulting and clinical psychology, 42(2), 155. [6] Gaucher, D., Friesen, J., & Kay, A. C. (2011). Evidence that gendered wording in job advertisements exists and sustains gender inequality. Journal of personality and social psychology, 101(1), 109. [7] Konnikov, A., Denier, N., Hu, Y., Hughes, K. D., Alshehabi Al-Ani, J., Ding, L., ... & Tarafdar, M. (2022). BIAS Word inventory for work and employment diversity,(in) equality and inclusivity (Version 1.0). SocArXiv. [8] Wang, R., Cheng, P., & Henao, R. (2023, April). 
Toward fairness in text generation via mutual information minimization based on importance sampling. In International Conference on Artificial Intelligence and Statistics (pp. 4473-4485). PMLR. Pdf: /pdf/d851e0b0f1ccaa350c8ec882d6304f0a76d04934.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Generalizing Consistency Policy to Visual RL with Prioritized Proximal Experience Regularization
Accept (poster)
Summary: The paper presents a novel method using consistency models as a policy parameterization to address online reinforcement learning. The paper first uses the dormant ratio metric to study properties of the consistency-AC method and determine whether it is suitable for online RL. Then, it presents the CP3ER method, which adds an entropy regularization term to consistency AC in order to make it suitable for online RL. Finally, the paper presents results showing strong performance in image-based RL benchmarks. Strengths: - Diffusion and consistency policies are an increasingly important research area, and in particular there has been limited work training these with online reinforcement learning. This paper seems to take an important step towards that direction, and thus has the potential to be significant. - Although several other works have used similar setups to apply DP/CP to offline RL, the specific formulation the authors employ to address online RL is novel as far as I can tell. - The method overall seems principled and the results presented in the paper seem promising. - Although there are issues with section 4 which I discuss extensively in the 'Questions' section, I like what the authors are trying to do and think if the section is written in more detail it can strengthen the case for their method. Weaknesses: - I think the biggest issue with the paper is that the story is a bit muddled. The core contribution of the paper seems to be CP3ER and the PPER regularization method. These things should be able to work for both low-dimensional and image-based observations. However, the authors specifically state that their goal is to address image-based observations. Why? Does this algorithm no longer show any benefit in these settings? The algorithm doesn't have any contributions that specifically focus on addressing image-based observations so I don't understand why that is a central focus of the evaluation. 
- In order to properly contextualize this algorithm I think it would be necessary to compare it to low-dim baselines like SAC and PPO on a low-dim version of the benchmarks in addition to the state-based versions. - In the introduction the authors mention that generative models like diffusion and consistency are helpful for RL because they allow the policy to represent complex behaviors. However, none of the tasks in the training suite are 'complex' enough that they couldn't be solved by a simple MLP policy. It would be nice to see perhaps some tasks where a policy that can sample multimodal actions shines above MLP-based policies. - Section 4 is confusing and there are several other minor presentation issues that I explain in the 'Questions' section Technical Quality: 3 Clarity: 2 Questions for Authors: Random Questions / Comments - I find the analysis related to figure 1 to be incomplete and hard to follow. - First, it would be helpful to explain somewhere in your paper what we can infer about a neural network based on its dormant ratio, which would make this experiment easier to understand. Your paper seems to assume the reader is very familiar with Sokar et al. but this need not be the case - How big are the datasets? Where does the online dataset come from (ie what RL algorithm, how long was it trained, how often do you update the online dataset, etc)? - Why is there a difference between online and offline dormant ratio for halfcheetah and not walker2d? And how can you draw conclusions from this experiment when you only run two environments that seem to get different results? - Figure 2 experiment is also missing a lot of details - What AC algorithm does 'Q-loss' refer to? - Where does the data for this experiment come from? - Once again, I'm having trouble understanding how to interpret these plots or how you come to your conclusion. 
- Figure 3 has similar issues to the last two - The results as they are look nice but I have a few comments - I think the set of baselines is incomplete. It would be nice to see a SOTA on-policy method like PPO - All of the results show normalized scores, but you never explain how they are normalized, which makes them impossible to compare to other papers Nit picks: - It would be nice if you could add one sentence in English describing what a dormant neuron is to section 3.3 for those who are not familiar with it. It would also be nice to explain why dormant ratio is important for the expressive capabilities of a network - Line 134: newtork Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors address limitations and broader impacts of their work in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
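Both the review's questions and the paper's analysis hinge on the dormant ratio of Sokar et al., where a unit counts as dormant when its normalized mean absolute activation falls below a threshold. A minimal NumPy sketch under that definition (the threshold and toy activations are illustrative, not the paper's code):

```python
import numpy as np

def dormant_ratio(activations, tau=0.025):
    """Fraction of tau-dormant units in one layer (after Sokar et al.).

    `activations`: (batch, units) post-activation outputs.
    A unit is dormant when its mean |activation|, normalized by the
    layer-wide average, falls at or below `tau`.
    """
    score = np.abs(activations).mean(axis=0)   # per-unit mean |h|
    norm = score / score.mean()                # normalize by layer average
    return float((norm <= tau).mean())

# Toy layer: 3 of 4 ReLU units active, one silent.
acts = np.array([[0.9, 0.0, 0.4, 0.7],
                 [1.1, 0.0, 0.6, 0.5]])
print(dormant_ratio(acts))  # 0.25
```

In practice the ratio is computed per layer over a batch of states and averaged across the policy network; a rising ratio during training is the degradation signal the paper tracks.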
Rebuttal 1: Rebuttal: Many thanks for the careful review and constructive feedback. Please note our global response with the attached PDF. In the modified revision, we expound on the correlation between the dormant ratio and the performance of the policy network in Section 3.3 as shown in the global response. > These things should be able to work for both low-dimensional and image-based observations. Does this algorithm no longer show any benefit in these settings? In this work, we propose a proximal policy regularization method to stabilize the training of the consistency policy to improve the sample efficiency of visual RL. CP3ER indeed has the potential to be applied to state-based RL tasks. We supplement comparisons with SOTA methods [1]. The results are shown in Table 1 of the attached PDF. Compared to current SOTA methods, CP3ER demonstrates significant advantages in state-based RL tasks, further proving the generalization capability of CP3ER across different RL tasks. > It would be nice to see perhaps some tasks where a policy that can sample multimodal actions shines above MLP-based policies. Diffusion/consistency models build the policy distribution without modifying the structure of the policy network. Our policy network utilizes a simple MLP network, which is the same as the networks in Consistency-AC [2] and Diffusion-QL [3]. > How big are the datasets? Where does the online dataset come from (ie what RL algorithm, how long was it trained, how often do you update the online dataset, etc) The pre-collected dataset from D4RL (medium-replay-v2) is used as the offline data, which is from the replay buffer of SAC, comprising 1 million transition tuples. For online training, we employ SAC as the baseline and sample data from the replay buffer to train the consistency policy with the consistency loss. The replay buffer is updated with each interaction step. For Figure 2, we use the same setting for the consistency loss. 
> Why is there a difference between online and offline dormant ratio for halfcheetah and not walker2d? And how can you draw conclusions from this experiment when you only run two environments that seem to get different results? The dormant ratio change trends in the two subplots are the same, even though there are slight differences in the specific details. We speculate that the diversity of samples and actions in the halfcheetah dataset is lacking, leading the consistency policy to quickly fit the data distribution and overfit during offline training. In contrast, during online training, due to the continuous updating of data, the policy network remains active. In the walker2d dataset, the diversity of the samples and actions is high, requiring a certain number of training steps to fit, resulting in the same trend as in online learning. Due to dataset limitations (3 medium-replay datasets are available in D4RL), we conducted 3 tasks when analyzing the impact of offline/online settings on the dormant ratio of the policy. To analyze the effects of training losses and observations on the dormant ratio, we performed 5 different tasks, respectively. Curves not included in the main paper are available in the attached PDF, specifically in Fig. B and C. > What AC algorithm does 'Q-loss' refer to? Q-loss refers to Equation [2] in the paper, which is the loss used to update the policy. > (Figure 2 and 3) I'm having trouble understanding how to interpret these plots or how you come to your conclusion. When training the consistency policy using Q-loss, the variance in the dormant ratio of the policy network is quite large. This is because, under some random seed settings, the dormant ratio rapidly increases and remains at a relatively high level throughout. Subsequent training hardly changes the number of active neurons in the network, causing the policy to fall into a dormant state where it outputs almost the same actions for different state inputs. With Fig. 
B and C in the attached PDF, this phenomenon indicates a performance decline. Compared to state inputs, the network's dormant ratio with image inputs quickly rises to high values with small variance, indicating severe degradation. With Fig. C in the attached PDF, we can observe that the policy degradation phenomenon of Consistency-AC in visual RL is severe. > It would be nice to see a SOTA on-policy method like PPO For visual RL, sample efficiency is a common issue. As an on-policy method, PPO does not exhibit notable sample efficiency, which is why it has not been extensively studied in visual RL. Instead, there is more focus on off-policy methods, such as shown in [4-7]. Additionally, in state-based RL, PPO performs poorly compared to other methods (Table 1 in the attached PDF). Furthermore, since our motivation is to improve sample efficiency in visual RL, we believe there is no need to compare with on-policy methods like PPO in visual RL tasks. > All of the results show normalized scores, but you never explain how they are normalized, which makes them impossible to compare to other papers. We used the metrics recommended in [8] to analyze and compare the performance of algorithms. Additionally, we provide the unnormalized IQM curves in the appendix (Figures 12, 14, and 16) for comparison with results from other papers. [1] Boosting Continuous Control with Consistency Policy, AAMAS 2024. 
[2] Consistency Models as a Rich and Efficient Policy Class for Reinforcement Learning, ICLR 2024 [3] Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning, ICLR 2023 [4] DrM: Mastering Visual Reinforcement Learning through Dormant Ratio Minimization, ICLR 2024(Figure 2, 11) [5] Mastering Visual Continuous Control: Improved Data-Augmented Reinforcement Learning, ICLR 2021 [6] TACO: Temporal Latent Action-Driven Contrastive Loss for Visual Reinforcement Learning, NeurIPS 2023 [7] Stabilizing Off-Policy Deep Reinforcement Learning from Pixels, ICML 2023 [8] Deep Reinforcement Learning at the Edge of the Statistical Precipice, NeurIPS 2021. --- Rebuttal Comment 1.1: Title: Most concerns addressed Comment: Thanks for running more experiments and addressing most of my concerns. I'll increase my score to a 5. My one major remaining concern is the following from my original review: > In the introduction the authors mention that generative models like diffusion and consistency are helpful for RL because they allow the policy to represent complex behaviors. However, none of the tasks in the training suite are 'complex' enough that they couldn't be solved by a simple MLP policy. It would be nice to see perhaps some tasks where a policy that can sample multimodal actions shines above MLP-based policies. The authors pointed out that their policy is indeed still an MLP which indicates to me that I may have worded my complaint in a confusing manner. Here when I say 'MLP policy' I'm referring to a setup that directly maps an input state to a deterministic action, Gaussian distribution, or something similar, and examples of algorithms that use such policies would be vanilla SAC and DDPG. On the other hand, the authors use a consistency policy which, while still parameterized by an MLP, uses the consistency model framework in order to efficiently simulate the reverse diffusion process. 
The authors motivate this decision in the intro when they argue that consistency policies are more capable of addressing complex tasks that would not be solvable with a Gaussian policy. However, none of the tasks in the paper are too complex to be solved by a Gaussian, so I feel that it doesn't make sense to claim this setup is more capable of such tasks. --- Reply to Comment 1.1.1: Comment: Thank you for recognizing our work. In the introduction, we mentioned that a unimodal policy such as a Gaussian policy cannot model complex behavior. The complex behavior here refers to the complex exploration behavior caused by multi-modal reward landscapes. This situation exists in many tasks, which is one of the reasons why a stochastic policy is needed. Previous works [1][2] have also shown that compared to unimodal policy distributions, multimodal distributions can escape from local optima more easily, thereby achieving better sample efficiency and performance. Section 6.2.1 of the paper demonstrated the Gaussian distribution's limited exploration in this scenario with the 1D continuous bandit problem [2]. Figure 8 shows that due to its expressiveness in representing complex behavior, CP3ER achieves diverse exploration behaviors and better performance. In addition, experimental results (Section 6.1.1, Figure 6, and Section B.2 in the Appendix, Figures 14 and 15) on hard tasks of the DeepMind Control Suite showed that the baseline DrQ-v2 using a Gaussian policy hardly learned meaningful behavior within 5M steps, while CP3ER achieved relatively good performance, indicating that in high-dimensional continuous action spaces, CP3ER has stronger exploration ability compared to a Gaussian policy. [1] Reinforcement Learning with Deep Energy-Based Policies, ICML 2017 [2] Reparameterized Policy Learning for Multimodal Trajectory Optimization, ICML 2023 --- Reply to Comment 1.1.2: Comment: Thank you again for your time in helping us improve our work. We hope the reply can address your concerns. 
We sincerely appreciate your recognition of our contribution and vote to accept our work!
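The rebuttal's normalized-score results follow the aggregate metrics recommended in [8] (the "Statistical Precipice" paper), whose headline statistic is the interquartile mean (IQM). A minimal sketch of that aggregation (illustrative, not the authors' evaluation code):

```python
import numpy as np

def iqm(scores):
    """Interquartile mean: the mean of the middle 50% of scores,
    a robust aggregate that discards the top and bottom quarters."""
    s = np.sort(np.asarray(scores, dtype=float).ravel())
    n = len(s)
    return float(s[n // 4 : n - n // 4].mean())

# Toy example: one outlier run barely moves the IQM.
print(iqm([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 100.0]))  # 3.5
```

In practice [8] pairs the IQM with stratified bootstrap confidence intervals over runs and tasks; the sketch only shows the point estimate.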
Summary: This paper proposes a novel method that employs a consistency model for policy parameterization in online visual reinforcement learning settings. It initially identifies the issues of prior consistency actor-critic methods in high-dimensional observation online RL settings through the lens of dormant ratio. Subsequently, it proposes to incorporate entropy regularization alongside a prioritized sampling strategy to regularize the policy. Empirical experiments on 21 tasks across both the DeepMind Control Suite and MetaWorld demonstrate the effectiveness of the proposed method. Strengths: - Overall, the paper is well-organized and presented. - The analysis of the limitations of existing consistency policy training in visual RL settings is thorough and convincing. - The experimental results across a wide range of tasks are promising compared to previous state-of-the-art methods. - The paper includes a good set of ablation studies to support the design choices of the proposed algorithm. Weaknesses: - Lack of baseline comparison in the experiments - The scope of the paper is limited to online and visual deep reinforcement learning - The proposed modification is straightforward compared with existing algorithms. Technical Quality: 3 Clarity: 3 Questions for Authors: - Although CP3ER is mainly designed for online visual RL, the proposed modifications (entropy regularization, prioritized replay, Gaussian mixture distribution of value functions) do not seem to be limited to visual observation settings. It would be interesting to see how the proposed algorithm performs in state-based RL settings. - For the 8 hard DeepMind Control Suite tasks and also Metaworld tasks, the authors compare CP3ER with TACO/ALIX/DrQ-v2 but not with DrM, which was reported in the original DrM paper to achieve good empirical performance on these tasks. This comparison here is missing. 
- For the analysis (Figures 2, 3, 9), it would be beneficial to include the episodic return in the plots so that readers can understand the numerical performance of the policy in addition to the policy network’s dormant ratio, which is an intrinsic factor not necessarily correlated with the policy’s episodic return. - The conclusion drawn from lines 202-203, “Therefore, we can infer that visual RL will exacerbate the instability of consistency policy training caused by the Q-loss under the actor-critic framework,” does not seem convincing. Even in ordinary policy Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and helpful suggestions. Please note the global response and the attached PDF. > The proposed modification is straightforward compared with existing algorithms. The key challenge lies in addressing the instability during the training of the consistency policy. By analyzing changes in the dormant ratio of the policy network, we observed a policy degradation phenomenon and speculate that this may be due to the instability of the data distribution and the score function represented by the Q-function. Therefore, we introduce entropy regularization. From the perspective of RL, our goal is to imbue the policy with specific attributes by modifying the loss, such as maximizing entropy to boost sample efficiency. From the perspective of matching distributions, entropy regularization provides a prior for the target data distribution, with the Q-function weighting this prior, thus stabilizing the optimization objective of the consistency model. > It would be interesting to see how the proposed algorithm performs in state-based RL settings. CP3ER can indeed be applied to state-based RL. Following [1], we selected 6 challenging tasks from the DeepMind Control Suite (DMC) environment to evaluate the methods. Table 1 in the attached PDF shows the results of different methods within 500k interaction steps. Compared to commonly used RL methods, CP3ER has significant performance advantages and also outperforms the current SOTA diffusion model-based method CPQL. > For the 8 hard DeepMind Control Suite tasks and also Metaworld tasks, the authors compare CP3ER with TACO/ALIX/DrQ-v2 but not with DrM, which was reported in the original DrM paper to achieve good empirical performance on these tasks. This comparison here is missing. DrM is indeed an excellent method for visual RL. In fact, we considered DrM in the appendix. Figures 14 and 15 show the comparison of DrM on DMC hard tasks. 
Figures 16 and 17 show the comparison results on Meta-world tasks. We did not include these results in the main part of the paper because when we reproduced DrM results using the official code [2], we could not achieve the performance claimed on Meta-world tasks. To obtain a fairer comparison, we compared our method and the reproduced results with the curves from [3], as shown in Table 2 of the attached PDF. Even compared to the results in the DrM paper, our method achieves comparable or even better performance. We have also included performance curves for several methods on DMC-hard and Meta-world tasks in the attached PDF (Fig. A; the left part shows curves on DMC-hard tasks and the right part shows curves on Meta-world tasks). > For the analysis (Figures 2, 3, 9), it would be beneficial to include the episodic return in the plots so that readers can understand the numerical performance of the policy in addition to the policy network’s dormant ratio, which is an intrinsic factor not necessarily correlated with the policy’s episodic return. Thank you very much for your suggestion. We have added an explanation of the relationship between the dormant ratio and policy performance after Section 3.3 in the revised version as shown in the global response. However, it is important to note that the dormant ratio is a measure of the capacity and the expressiveness of the network, and it is neither a sufficient nor a necessary condition for assessing policy performance. A policy network with a low dormant ratio may have a low episode return. For example, when trained with supervised learning (such as the consistency loss), the network typically exhibits a low dormant ratio, but since its goal is to fit the data distribution rather than to maximize average returns, its policy performance may not necessarily be superior. 
To adequately assess the policy degradation caused by the Q-loss, we compared the performance of the consistency policy trained with Q-loss to SAC (the online data-collection policy required when training with consistency loss). We also compared the dormant ratios of the policy networks in SAC and redrew Figure 2 (Figure B, middle and right, and Figure C in the attached PDF). The consistency policy trained with Q-loss has a higher dormant ratio than those trained with consistency loss and SAC. We revised Figures 3 and 9 from the original paper, adding average returns on the other axis; the modifications are shown in Figures D and E in the attached PDF. The results in the figures indicate that, generally, the higher the dormant ratio of the policy network, the poorer the performance of the policy. Visual RL tasks exacerbate the degradation of consistency policies, resulting in poor policy performance. > The conclusion drawn from lines 202-203, “Therefore, we can infer that visual RL will exacerbate the instability of consistency policy training caused by the Q-loss under the actor-critic framework,” does not seem convincing. Even in ordinary policy In the third part of Section 4, we analyzed the consistency policy, particularly whether Consistency-AC is suitable for visual RL tasks. Figure 3 shows the dormant ratio of the policy network under different observations. Compared to state inputs, the network's dormant ratio rapidly rises to very high values with visual inputs, and the variance across seeds is small, indicating that this phenomenon is stable and unaffected by random seeds. Compared to state-based RL tasks, the policy degradation of Consistency-AC in visual RL is quite severe. Since Consistency-AC adopts a Q-loss under the actor-critic framework, visual RL thus exacerbates the instability of consistency policies. 
In the revised version, we added an explanation in Section 3.3 discussing the relationship between the dormant ratio and policy performance. [1] Boosting Continuous Control with Consistency Policy, AAMAS 2024 [2] https://github.com/XuGW-Kevin/DrM [3] DrM: Mastering Visual Reinforcement Learning through Dormant Ratio Minimization, ICLR 2024 --- Rebuttal 2: Comment: We hope this reply addresses your concerns. We sincerely appreciate your recognition of our contribution and your vote to accept our work!
Summary: The paper addresses challenges in visual RL with high-dimensional state spaces, specifically focusing on improving sample efficiency and training stability. It introduces a novel method called CP3ER, which incorporates sample-based entropy regularization and prioritized proximal experience regularization to stabilize policy training and enhance sample efficiency. The proposed CP3ER achieves state-of-the-art performance in 21 tasks across the DeepMind control suite and Meta-world, demonstrating the effectiveness of applying consistency models to visual RL. The paper identifies and addresses the instability issues caused by the Q-loss in the actor-critic framework and the non-stationary distribution of online RL data. Experimental results show that CP3ER outperforms existing methods without relying on additional exploration strategies or auxiliary losses. Strengths: 1. The introduction of CP3ER, which combines consistency models with prioritized proximal experience regularization, presents a novel approach to addressing challenges in visual RL, enhancing both sample efficiency and training stability. 2. The paper provides comprehensive empirical evidence demonstrating that CP3ER achieves state-of-the-art performance across a wide range of tasks in the DeepMind control suite and Meta-world. 3. The paper provides a thorough analysis of the impact of non-stationary distributions and the actor-critic framework on consistency policy, offering insights into the underlying mechanisms that affect policy training stability. Weaknesses: In general, the paper is well-written and easy to follow. However, there is no illustration or theoretical support for using Eq. (8) for the sampling weight. It would be better for the authors to consider more theoretical analysis of the solution design. Technical Quality: 3 Clarity: 3 Questions for Authors: Why use Eq. (8) for the sampling weight? Can other functions that share a similar shape, as demonstrated in Fig. 4(b), also work? 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations have been discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your recognition of our work. > Why use Eq. (8) for the sampling weight? Can other functions that share a similar shape, as demonstrated in Fig. 4(b), also work? Equation (8) is, in fact, an empirical formula inspired by the sigmoid function, adjusted to satisfy the desired properties: we aim to sample the most recently collected data with a high probability during the sampling process, while sampling older data with a relatively lower probability. If other formulas satisfy the required properties (Fig. 4(b)), they may also be used to generate sampling weights.
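The property the rebuttal describes — near-uniform, high sampling probability for recently collected transitions, smoothly decaying but non-zero probability for older ones — can be illustrated with a hypothetical sigmoid-shaped weighting. Note this is not the paper's actual Eq. (8), whose exact form is not reproduced here; the parameters `k` and `midpoint` are illustrative stand-ins.

```python
import math

def recency_weights(buffer_size, k=10.0, midpoint=0.5):
    """Hypothetical sigmoid-shaped sampling weights over a replay buffer.

    Index 0 is the oldest transition, index buffer_size - 1 the newest.
    Recent transitions get high, near-uniform weight; older ones decay
    smoothly toward a small but non-zero probability.
    """
    if buffer_size == 1:
        return [1.0]
    # normalized "age" of each transition: 0 for newest, 1 for oldest
    ages = [(buffer_size - 1 - i) / (buffer_size - 1) for i in range(buffer_size)]
    # sigmoid that is ~1 for small ages and ~0 for large ages
    raw = [1.0 / (1.0 + math.exp(k * (a - midpoint))) for a in ages]
    total = sum(raw)
    return [w / total for w in raw]

weights = recency_weights(1000)
# the newest transition receives the highest sampling probability
assert weights[-1] == max(weights)
```

Any function with the same monotone, saturating shape (cf. Fig. 4(b)) would serve the same purpose, which matches the authors' reply.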
Summary: The paper analyzes the problems faced when extending consistency policy to visual RL under the actor-critic framework and, by analyzing the dormant rate of the neural networks, discovers a collapse of the consistency policy during training under the actor-critic framework. The authors propose a consistency policy with prioritized proximal experience regularization (CP3ER), which employs entropy regularization to constrain policy behavior. The experiments evaluate CP3ER against recent baselines (DrQ-v2, ALIX, TACO) on 21 visual control tasks. Strengths: - The paper is well-written and organized, and includes a thorough discussion of the relevant related works. - Sec. 4 analyzes the Consistency Actor-Critic from the perspective of dormant rates, which is very interesting. - The experimental results showcase impressive performance gains over the state of the art on a wide range of robotics tasks (e.g., DM-Control, Meta-World). Ablations are also provided to delineate the impact of each modification. Weaknesses: - The method is not directly applicable to domains with discrete action spaces. - The experiments do not compare with other consistency RL methods. Technical Quality: 3 Clarity: 3 Questions for Authors: - According to [1], dormant rates in reinforcement learning and supervised learning represent different meanings. Can you explain in detail how dormant rates affect the expressive ability of the consistency policy during training? For example, what does a high dormant rate indicate? What does an increasing dormant rate signify? And why? - [1] Sokar, Ghada, et al. "The dormant neuron phenomenon in deep reinforcement learning." *International Conference on Machine Learning*. PMLR, 2023. - In Sec. 4, Fig. 3, since the Q-loss under the actor-critic framework will destabilize consistency policy training, why was only the Q-loss used during the training process? - How do the computational cost and runtime compare with other baselines? 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See the weaknesses and questions section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and valuable comments. Please note the global response with the attached PDF. > The method is not directly applicable to domains with discrete action spaces. Indeed, our method has not yet been extended to discrete action spaces, because we use consistency models to model the policy, and most diffusion/consistency models currently operate on continuous action spaces. Our proposed method is therefore also subject to this limitation. However, to extend it to discrete action spaces, one possible approach could be to binarize the discrete actions and then map them to continuous spaces, as suggested in [1]. This could be an interesting and worthwhile direction for future exploration. > The experiments do not compare with other consistency RL methods. Our proposed method aims to address the issue of sample efficiency in visual RL. To the best of our knowledge, as of the time of paper submission, CP3ER is the only method that applies diffusion/consistency models to an online visual RL policy. This is why we have not compared it with other RL methods based on diffusion or consistency models. In the realm of state-based RL, there are already some methods based on diffusion or consistency models, and we have compared our method with them. The results are shown in Table 3 of the attached PDF. Following the settings in [2], we compared CP3ER with Consistency-AC and online Diffusion-QL. As the results show, although our method is designed for visual RL problems, it still demonstrates significant advantages over existing diffusion- or consistency-model-based methods when extended to state-based RL tasks. > Can you explain in detail how dormant rates affect the expressive ability of the consistency policy during training? For example, what does a high dormant rate indicate? What does an increasing dormant rate signify? And why? 
The dormant ratio of a neural network represents the proportion of neurons that are inactive. A higher dormant ratio indicates fewer active neurons in the network, implying that the network's capacity and expressiveness are impaired. In RL, the policy network's dormant ratio is related to episode return: a higher dormant ratio hints at lazy actions and lower episode returns. Conversely, when performance is good, the policy network is usually more active, and the dormant ratio is usually lower. This phenomenon has been observed in several studies [3-6]. > In Sec. 4, Fig. 3, since the Q-loss under the actor-critic framework will destabilize consistency policy training, why was only the Q-loss used during the training process? In the third part of Section 4, our main objective is to investigate whether the consistency policy is suitable for visual RL tasks. Consistency-AC [2], a typical representative of consistency policy, uses only the Q-loss during training; therefore, we maintained the same settings in this part of the experiment. Fig. 3 illustrates the impact of different observations on the dormant ratio of the consistency policy network. The phenomena in the figure show that the consistency policy in visual RL faces a very high dormant ratio, making policy training challenging. From Fig. 3, we can also conclude that Consistency-AC is not suitable for visual RL tasks. > How do the computational cost and runtime compare with other baselines? We evaluated the training and inference time of CP3ER separately, comparing it with the methods mentioned in the paper. The GPU used is a 2080 Ti, and the training batch size is 256. The results are shown in the following table. Compared to TACO, our proposed CP3ER does not show a significant increase in training or inference time. During training, the additional time cost of CP3ER compared to DrQ-v2 is mainly attributed to the computation of the regularization loss term. 
|                             | DrQv2   | ALIX    | TACO    | DrM     | CP3ER   |
| --------------------------- | ------- | ------- | ------- | ------- | ------- |
| Training Time per Batch (s) | 0.0339  | 0.0332  | 0.1919  | 0.04075 | 0.04668 |
| Inference Time per Step (s) | 0.00132 | 0.00141 | 0.00131 | 0.00184 | 0.00218 |

[1] Structured Denoising Diffusion Models in Discrete State-Spaces, ICLR 2023. [2] Consistency Models as a Rich and Efficient Policy Class for Reinforcement Learning, ICLR 2024 [3] DrM: Mastering Visual Reinforcement Learning through Dormant Ratio Minimization, ICLR 2024 (Figures 2, 11) [4] In deep reinforcement learning, a pruned network is a good network, arXiv 2024 (Figure 11) [5] The Dormant Neuron Phenomenon in Deep Reinforcement Learning, ICML 2023 (Figures 9, 11, 16) [6] Pretrained Visual Representations in Reinforcement Learning, arXiv 2024 (Figures 3, 4) --- Rebuttal 2: Comment: We hope the reply can address your concerns. We sincerely appreciate your recognition of our contribution and vote to accept our work! --- Rebuttal Comment 2.1: Comment: I appreciate the authors' clarifications and still remain in favor of acceptance.
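For readers unfamiliar with the dormant ratio discussed in this thread, here is a minimal sketch of the metric following the definition in the cited ICML 2023 paper (Sokar et al.): a neuron is τ-dormant when its batch-averaged absolute activation, normalized by the layer-wide average of those means, falls at or below τ. The pure-Python implementation and threshold value below are illustrative, not the authors' exact code.

```python
def dormant_ratio(activations, tau=0.025):
    """Fraction of tau-dormant neurons in one layer.

    activations: list of samples, each a list of post-activation values
    for every neuron in the layer. Following Sokar et al. (ICML 2023),
    a neuron is tau-dormant when its mean absolute activation, divided
    by the layer-wide average of those means, is <= tau.
    """
    num_neurons = len(activations[0])
    # per-neuron mean absolute activation over the batch
    mean_abs = [
        sum(abs(sample[i]) for sample in activations) / len(activations)
        for i in range(num_neurons)
    ]
    layer_mean = sum(mean_abs) / num_neurons
    # normalized activity score of each neuron
    scores = [m / (layer_mean + 1e-9) for m in mean_abs]
    return sum(s <= tau for s in scores) / num_neurons

# Three of four neurons are silent on every sample -> dormant ratio 0.75
acts = [[0.0, 0.0, 0.0, 2.0],
        [0.0, 0.0, 0.0, 1.0]]
print(dormant_ratio(acts))  # 0.75
```

Averaging this quantity over all layers gives the network-level ratio tracked in the figures referenced above.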
Rebuttal 1: Rebuttal: **Please see the attached one-page PDF for additional experimental results.** We would like to express our sincere gratitude for the efforts and valuable feedback provided by all the reviewers. We are very pleased that our work has been recognized by the reviewers, and we have also noted some points that have been commonly raised: ## Can CP3ER extend to state-based RL? Both reviewers *UMZF* and *ceLx* have raised concerns regarding whether our proposed CP3ER can be generalized to state-based RL tasks. Our initial motivation was to address the issue of sample efficiency in visual RL. The high expressiveness of the policy helps to improve exploration and thereby enhances sample efficiency. Directly introducing diffusion/consistency models into visual RL results in severe policy degradation, preventing policy training. Therefore, we proposed a proximal policy regularization method to stabilize the training of consistency policies in visual RL. We also recognize that CP3ER indeed has the potential to be applied to state-based RL tasks. Consequently, we have included additional experiments in the attached PDF (Table 1 and Table 3) comparing CP3ER with SOTA methods in state-based RL. We found that CP3ER still exhibits advantages in state-based tasks. However, these advantages are not as pronounced as in visual RL tasks, because while policy degradation also occurs in state-based tasks using diffusion/consistency models, it is not as severe as in visual RL. These results also support the discussion in the last paragraph of Section 4 of our paper. ## What is the relationship between dormant ratio, policy degradation, and policy performance? Regarding Section 4, reviewers *TpQn*, *UMZF*, and *ceLx* asked how the conclusion of policy degradation is derived from the changes in the dormant ratio of the policy network. We acknowledge that, in writing the paper, we neglected that readers might not be familiar with the concept of the dormant ratio in RL. 
As introduced in [3], the dormant ratio of a neural network indicates the proportion of inactive neurons and is typically used to measure the activity of the network. A higher dormant ratio implies fewer active neurons in the network, implying that the network's capacity and expressiveness are impaired. In RL, the episode return is closely related to the dormant ratio of the policy network. A higher dormant ratio results in more lazy action outputs, inactive agent behavior, and lower episode returns; conversely, when policy performance is good, the policy network is usually more active, and the dormant ratio is typically lower. This phenomenon has been reported in [3-6]. Based on these findings and the results in Section 4, the Q-loss is identified as the primary cause of the increase in the dormant ratio of the consistency policy. An increase in the dormant ratio often signifies a decline in policy performance (we have added additional curves of dormant ratio and performance in Figs. B, C, and D of the attached PDF), thus indicating policy degradation. According to the results in Figure 3 of the paper (Figure D in the attached PDF; considering the page limit, tasks that already appear in the main paper are not included in the attached PDF), it can be concluded that the consistency policy in visual RL tasks faces severe policy degradation. In the revised version, we expound on the correlation between the dormant ratio and the performance of the policy network in Section 3.3 as follows: > As introduced in [3], the dormant ratio of a neural network indicates the proportion of inactive neurons and is typically used to measure the activity of the network. A higher dormant ratio implies fewer active neurons in the network, implying that the network's capacity and expressiveness are impaired. In RL, the episode return is closely related to the dormant ratio of the policy network. 
A higher dormant ratio results in more lazy action outputs, inactive agent behavior, and lower episode returns; conversely, when policy performance is good, the policy network is usually more active, and the dormant ratio is typically lower. This phenomenon has been reported in [3-6]. Regarding the other issues raised by the reviewers, we provide detailed responses below. We strive to address all the concerns raised; if there are any further questions, we are very pleased to discuss them. We have made revisions to the paper addressing certain issues; in the revised version, changes are highlighted in blue. [1] Boosting Continuous Control with Consistency Policy, AAMAS 2024 [2] Consistency Models as a Rich and Efficient Policy Class for Reinforcement Learning, ICLR 2024 [3] The Dormant Neuron Phenomenon in Deep Reinforcement Learning, ICML 2023 (Figures 9, 11, 16) [4] DrM: Mastering Visual Reinforcement Learning through Dormant Ratio Minimization, ICLR 2024 (Figures 2, 11) [5] In deep reinforcement learning, a pruned network is a good network, arXiv 2024 (Figure 11) [6] Pretrained Visual Representations in Reinforcement Learning, arXiv 2024 (Figures 3, 4) Pdf: /pdf/e6e21b44dd2293d0e2cd132dd098be182ab8f290.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Advancing Fine-Grained Classification by Structure and Subject Preserving Augmentation
Accept (poster)
Summary: The paper proposes SaSPA, which uses several large, pre-trained generative, language, and vision-language models to produce high-quality synthetic images. Specifically, they focus on images for FGVC, where high inter-class similarity makes it challenging to synthesize data that is not only diverse (capturing the intra-class variability) but also faithful to the intended class label. They combined GPT-4 with BLIP ControlNet diffusion, with edge-map conditioning and post-generation filtering (via top-10 ranking with CLIP), to achieve performance gains compared to baselines with only 2 augmentations per image (notably fewer than the 10x once thought to be a rule of thumb for training with synthetic data). Strengths: [S1] SaSPA seems well-motivated: rather than haphazardly throwing large language and generative models at the task, the paper presents a carefully crafted pipeline that addresses the fidelity issue with both strong conditions (such as the edge maps) and filtering. [S2] The paper is mostly well-presented and easy to follow, with motivation, method, and results that flow well both narratively and logically. [S3] The experiments are, for the most part, very thorough (that's not to say nothing is missing; see W1). This does not matter so much, but [S4] The differences between the proposed method and diff-mix are well-articulated, and SaSPA has some clear advantages to mitigate its disadvantages (such as not requiring additional training of the generative pipeline). Weaknesses: The results do not deliver on the promise of the paper's motivation. Principally, [W1] It is unclear if this approach can contribute to some state-of-the-art result. In some areas (e.g. text-to-image generation itself), asking SOTA of a research paper might represent an unreasonable burden. However, in an area like FGVC, such evaluations are both practical and necessary. 
The authors should show that the performance gains on the ResNet50 (Table 1) don't vanish when applied to stronger methods (and I don't really count ResNet101 here; it's barely better than R50), particularly for CUB. Speaking of CUB, it seems from the teaser that this method may struggle to capture the most fine-grained differences, such as when it recolors the engine. Perhaps this is why the method does not fare so well with the birds dataset, or in other words, [W2] The method's inferiority to CutMix on CUB for 448x448 images is very concerning (Table 3). The higher resolution is the standard for FGVC, and a method that only works well for low resolution is quite limited in impact. Additionally, it calls into question whether the method is good mainly for images of rigid objects, or if it could work well for other datasets of living things (Stanford Dogs, Fungi, NABirds). On a more minor note, [W3] Some information seems misplaced. Image resolution is critical context in FGVC, especially if the paper uses smaller-than-normal images. The comparison in Table 3 should also compare M for diff-mix and SaSPA. and [W4] Using CLIP for the filtering step seems strange. A model trained on the dataset (or even an ensemble) would likely filter the data much more reliably. Would this not be a much stronger approach? Technical Quality: 3 Clarity: 3 Questions for Authors: My weaknesses correspond to questions. [W1] - If this data augmentation is applied to a SOTA FGVC model, is the impact orthogonal, e.g., do I get a new SOTA? Or is this just another way to get the model to learn the same information as the existing tricks? [W2] - Does this method work well beyond Cars/Aircraft/DTD? Will it break for other datasets, similar to how it seems to break for CUB? [W4] - How was CLIP selected for the filtering step? I am currently leaning reject, mainly due to W1 and W2. 
For clarity's sake, I have similar concerns about diff-mix and would not necessarily have voted to accept, had I reviewed that paper. I say this just because W1 applies to diff-mix as well, and replying to my review by pointing this out will not assuage my concern. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and insightful feedback. We appreciate your recognition of SaSPA's well-motivated and carefully crafted pipeline, as well as your positive comments on the presentation and thoroughness of our experiments. Your constructive points will help us improve our work further. **“to achieve performance gains compared to baselines, with only 2 augmentations per image (notably less than the 10x once thought to be a rule of thumb for training with synthetic data).”** Thank you for highlighting this! We have now included this point in the paper. **“[W1] It is unclear if this approach can contribute to some state-of-the-art result. In some areas (e.g. text-to-image generation itself), asking SOTA of a research people might represent an unreasonable burden. However, in an area like FGVC, such evaluations are both practical and necessary. The authors should show that the performance gains on the ResNet50 (Table 1) don't vanish when applied to stronger methods (and I don't really count ResNet101 here, it's barely better than R50), particularly for CUB.”** - We'd like to clarify that our baseline architecture is not simply a standard ResNet, but rather the CAL [1] architecture, which is tailored for FGVC and was SoTA at the time of its publication. ResNet is used as a backbone in this architecture. - We have updated the paper to make this distinction clearer, preventing any potential confusion for future readers. - While CAL may not be the current SoTA for some datasets, it remains a strong and relevant baseline for FGVC tasks. For instance, MetaFormer [2], suggested by reviewer JRLS, shows mixed results when compared to CAL (MetaFormer is better than CAL on CUB but performs worse on Cars, with similar results on aircraft). - Nevertheless, we acknowledge the value of evaluating our method on more architectures, and we will have results for a more recent architecture for the camera-ready version of our paper. 
**“The method's inferiority to CutMix on CUB for 448x448 images is very concerning (Table 3). The higher resolution is the standard for FGVC, and a method that only works well for low resolution is quite limited in impact.”** Please refer to the global rebuttal; we have added experiments on high-resolution images. **“Additionally, it calls into question whether the method is good mainly for images of rigid objects, or if it could work well for other datasets of living things (Stanford Dogs, Fungi, NABirds).”** Please refer to the global rebuttal; we have added experiments on two new datasets: Dogs and Pet. **“[W3] Some information seems misplaced. Image resolution is critical context in FGVC, especially if the paper uses smaller-than-normal images. The comparison in Table 3 should also compare M for diff-mix and the SaSPA.”** - Regarding image resolution, we have updated our manuscript to make the resolution clearer. - Please also refer to the global rebuttal; we have added results for higher-resolution images as well. - Regarding the comparison in Table 3, generally speaking, we have two training schemes in our paper: (1) our training scheme and (2) the diff-mix training scheme. We opted to use the diff-mix training scheme when comparing to it because this provides a stronger comparison, favoring diff-mix as they trained and validated using their own training scheme and parameters. Hence, we cited diff-mix results from their paper rather than training using our pipeline, including for the few-shot scenarios. - We have used the same M for SaSPA as diff-mix used to ensure an apples-to-apples comparison. We updated the manuscript with this information. - It is important to note that diff-mix is a **concurrent paper**. According to the official guidelines, authors are not expected to compare to such works. Nevertheless, we chose to add this comparison for completeness. The methods are quite different, and we believe both are valuable for the research community. 
Unlike Diff-Mix, which uses fine-tuning for its generative model, our method does not rely on such heavy fine-tuning. We have made it clearer in the paper that diff-mix is a concurrent work, as this point may have been missed. **“[W4] Using CLIP for the filtering step seems strange. A model trained on the dataset (or even an ensemble) would likely filter the data much more reliably. Would this not be a much stronger approach?”** - Our method uses two filtering methods: - (1) Semantic filtering, proposed by a recent work, ALIA [3], to alleviate meta-class misrepresentation. Using **CLIP**, this process evaluates the relevance of generated images to the specific task at hand. For example, in a car dataset, each generated image is assessed against a variety of prompts such as “a photo of a car”, “a photo of an object”, “a photo of a scene”, “a photo”, and “a black photo”. Images that CLIP does not recognize as “a photo of a car” are excluded to ensure that the augmented dataset closely aligns with the target domain. - (2) Our top-10 filtering, which uses a **model trained on the dataset**, as you suggested. We made it clearer in the paper. - Note: Your intuition is correct! As we show in the paper (Appendix F.2, Filtering Strategies), top-10 filtering is more important than semantic filtering, as evident in Table 19. [1] Rao, Yongming, et al. "Counterfactual attention learning for fine-grained visual categorization and re-identification." Proceedings of the IEEE/CVF international conference on computer vision. 2021. [2] Diao, Qishuai, et al. "Metaformer: A unified meta framework for fine-grained recognition." arXiv preprint arXiv:2203.02751 (2022). [3] Dunlap, Lisa, et al. "Diversify your vision datasets with automatic diffusion-based augmentation." Advances in neural information processing systems 36 (2023): 79024-79034. --- Rebuttal Comment 1.1: Title: Concerns Resolved Comment: All my weaknesses have been properly addressed. 
My apologies for the confusion on your Predictive Confidence Filtering (and corresponding F.2). As far as diff-mix goes, I realize now that I wasn't clear in my review, but I understand that it is concurrent work (that's why I mentioned it as minor, in both strengths and weaknesses). Presenting it isn't mandatory, but given that it has been presented, preferably it would be presented properly (and it seems the main issue was corrected). As far as my evaluation of this work and my rating go, the comparison to diff-mix is only a positive, insofar as it reflects this work's thoroughness. I would suggest potentially adding a clearly marked limitations section to the final manuscript to address lingering issues such as less satisfying performance on high-resolution CUB. --- Reply to Comment 1.1.1: Comment: We are happy that all your concerns are resolved. We will incorporate the clarifications and add a clearly marked limitations section as you suggested.
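The semantic-filtering step described in the rebuttal above reduces to a simple nearest-prompt decision rule. The sketch below is an illustration only: in SaSPA the similarities come from CLIP image and text embeddings, whereas here small stand-in vectors are used; the prompt strings are the examples from the rebuttal.

```python
def cosine(u, v):
    # cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

def passes_semantic_filter(image_emb, prompt_embs, target_prompt):
    """Keep a generated image only if the target prompt (e.g.
    'a photo of a car') is its most similar prompt among all candidates."""
    sims = {p: cosine(image_emb, e) for p, e in prompt_embs.items()}
    return max(sims, key=sims.get) == target_prompt

# Stand-in embeddings; a real pipeline would use CLIP encoders here.
prompts = {
    "a photo of a car":   [1.0, 0.0],
    "a photo of a scene": [0.0, 1.0],
}
print(passes_semantic_filter([0.9, 0.2], prompts, "a photo of a car"))  # True
print(passes_semantic_filter([0.1, 0.9], prompts, "a photo of a car"))  # False
```

The second stage the rebuttal describes (top-10 filtering) instead ranks candidates with a classifier trained on the target dataset and keeps the highest-ranked ones.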
Summary: The paper proposes a data-augmentation technique tailored to fine-grained image classification. The goal is to increase class fidelity while maintaining high variance in the images, something current diffusion-based data augmentation techniques are struggling with, especially in the fine-grained domain. The main idea is to preserve both the scene structure (by conditioning on edges with ControlNet) and subjects (by using BLIP diffusion). Rather than augmenting a single sample, the edge map, class subject reference image, and prompt are sampled independently. An image filtering strategy is proposed to increase the data quality. The method doesn’t require any finetuning of the generative model. The method is shown to outperform baselines on 5 datasets, and different scenarios such as full dataset training and few-shot classification are explored. Better generalization to unseen backgrounds compared to classifiers trained on the data augmentation baselines is also shown. Strengths: 1. The method builds on existing techniques, combining them in a novel way, which may lead to followup works 2. The method is well-contextualized within related work 3. The method is simple (easy to understand and implement), novel and addresses an important problem of training deep learning models on domains with limited amount of training data, in particular for fine-grained visual classification 4. The paper is well written, easy to follow, with well-designed figures explaining the methodology 5. Extensive evaluation including many ablation experiments justifying the design choices 6. Experiments are well described with all the necessary details such as information on hyper-parameter tuning, code will also be released for reproducibility Weaknesses: 1. The baseline classifier architecture is a standard image classifier. 
Meanwhile, there are architectures tailored to the fine-grained classification domain, such as MetaFormer [1, 2]; it would be preferable to also evaluate on state-of-the-art classifiers from the FGVC domain. 2. It is not clear how the method would perform on a dataset of objects from a less common domain (not as well represented in the training data of the diffusion models). See Questions for more details. Technical Quality: 3 Clarity: 4 Questions for Authors: ## General remarks 1. Code release is only mentioned in section 3.5, which is easy to miss; I would place it in the abstract/introduction. 2. Even though it is proposed for FGVC, the method is in principle applicable elsewhere - have you tried applying it to standard image classification datasets? Is there a reason why you think the method would not work well? ## Add weakness 2 3. The evaluation datasets consist of very common objects/animals, such as cars and birds, which are well-represented in the training data of diffusion models. It would help in understanding the strengths and weaknesses of the method to see its performance on a dataset with less common meta-classes, for example iNaturalist. **References**: [1] Diao, Qishuai, et al. "Metaformer: A unified meta framework for fine-grained recognition." arXiv preprint arXiv:2203.02751 (2022). [2] He, Ju, et al. "Transfg: A transformer architecture for fine-grained recognition." Proceedings of the AAAI conference on artificial intelligence. Vol. 36. No. 1. 2022. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Limitations are adequately addressed in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your recognition of our method's novelty and effectiveness in addressing training with limited data. We're glad you found the paper well-written and the evaluations helpful. Our code has already been released for reproducibility. **“The baseline classifier architecture is a standard image classifier. Meanwhile, there are architectures tailored to the fine-grained classification domain such as the MetaFormer [1, 2], it would be preferable to also evaluate on state-of-the-art classifiers from the FGVC domain.”** - We'd like to clarify that our baseline architecture is not simply a standard ResNet, but rather the CAL [1] architecture, which is tailored for FGVC and was SoTA at the time of its publication. ResNet is used as a backbone in this architecture. - We have updated the paper to make this distinction clearer, preventing any potential confusion for future readers. - While CAL may not be the current SOTA for some datasets, it remains a strong and relevant baseline for FGVC tasks. For instance, MetaFormer shows mixed results when compared to CAL (MetaFormer is better than CAL on CUB but performs worse on Cars, with similar results on aircraft). - Nevertheless, we acknowledge the value of evaluating our method on more architectures like MetaFormer, and we will have results for another architecture such as the MetaFormer for the camera-ready version of our paper. **“Code release is only mentioned in section 3.5, which is easy to miss, I would place it in abstract/introduction.”**. Thank you for the suggestion. We have moved the code release information from section 3.5 to the abstract/introduction to increase visibility. **“Even though it is proposed for FGVC, the method is in principle applicable elsewhere - have you tried applying it to standard image classification datasets? Is there a reason why you think the method would not work well?”** - Thank you for considering the broader applicability of our method. 
There is no inherent reason our method should not work well on standard image classification datasets. - We focused on FGVC after identifying a gap in the existing literature concerning the application of diffusion models as generative augmentation within this domain. FGVC poses unique challenges and opportunities due to the subtle distinctions between classes, the struggles of current generative models to create images with accurate subclass fidelity, and the scarcity of data. - We believe our method has the potential to improve standard image classification as well, given its strong performance in generating diverse and accurate images for fine-grained categories. While our current focus was on FGVC, we are excited about the possibility of extending our approach to more general datasets in future work. Hence, we have updated our future work section. **“The evaluation datasets consist of very common objects/animals which are well-represented in the training data of diffusion models such as cars and birds. It would help understanding the strengths and weaknesses of the method to see its performance on a dataset with less common meta-classes, for example the iNaturalist.”** - We need to differentiate between two cases: - **Common Meta Classes with Uncommon Subclasses:** While the meta classes like airplanes and birds are common, the subclasses are not. For instance, generating a Boeing 737-300 airplane in a text-to-image manner can be challenging (see Figure 1 in the rebuttal PDF). - **Uncommon Meta Classes:** Some datasets we chose have uncommon meta classes. For example: - DTD (Describable Textures Dataset): Contains specific textures such as Blotchy, Honeycombed, and Laced, which are not very common. - CompCars: Includes images of car parts such as headlights and taillights, which are not commonly associated with the correct caption in datasets like LAION, which were used to train the generation models. 
We realize that evaluating on even more uncommon object datasets could further demonstrate the robustness of our method, and if the reviewer finds that it will help the exposition, we will have results for an uncommon dataset for the camera-ready version of our paper. [1] Rao, Yongming, et al. "Counterfactual attention learning for fine-grained visual categorization and re-identification." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. --- Rebuttal Comment 1.1: Comment: Thank you for the response and your effort! My concerns have been mostly addressed and I am keeping my positive score. The only remaining one is that I am not convinced the DTD dataset results are enough to show the generalization of the method to meta-classes less frequently represented in the training data of diffusion models. As mentioned in the original review, I think this is a well-written paper with solid contributions. To make the contributions stronger, I am missing positive results on more diverse datasets, for example, showing the broader applicability of the method outside of FGVC, or more datasets with less common objects, to increase my score. --- Reply to Comment 1.1.1: Comment: We are happy that most of your concerns are resolved, and we thank you for your positive feedback.
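As a schematic illustration of the independent-sampling strategy described in the review summary (edge map, class subject reference, and prompt drawn independently rather than all taken from a single source image), here is a minimal sketch. All names are hypothetical; in the actual method the sampled triple conditions a diffusion pipeline (ControlNet + BLIP Diffusion), which is omitted here.

```python
import random


def sample_augmentation_inputs(edge_maps, subject_refs, prompts, n, seed=0):
    """Independently sample (edge map, subject reference, prompt) triples.

    Decoupling the three conditions multiplies the number of distinct
    generation inputs (|edge_maps| * |subject_refs| * |prompts|) compared
    to augmenting each training image with its own edge map and subject.
    """
    rng = random.Random(seed)
    return [
        (rng.choice(edge_maps), rng.choice(subject_refs), rng.choice(prompts))
        for _ in range(n)
    ]
```

Each returned triple would then be fed to the (hypothetical) generation step; the sketch only shows the sampling logic.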
Summary: This paper presents SaSPA, a generative augmentation method specifically designed for FGVC. The method generates diverse, class-consistent synthetic images by conditioning on edge maps and subject representation. It uses ControlNet conditioned on edge maps and BLIP Diffusion for its zero-shot image generation ability. Their method is really smart and novel. They show results on Aircraft, Stanford Cars, CUB, DTD, and CompCars for fine-grained evaluation. Strengths: 1) I found the methodology of this paper to be highly impressive. The approach is both intelligent and relatively straightforward, making it intuitively appealing. 2) Utilizing GPT-4 to generate captions, followed by employing ControlNet on edges to create images via BLIP Diffusion, is a particularly smart strategy. 3) Additionally, the analysis on mitigating contextual bias is excellent, providing insightful and valuable results. Weaknesses: 1) The datasets on which the experiments have been done are of small scale. I would expect this method to perform well on a fine-grained evaluation of the ImageNet dogs setup as well. Showing experiments on 200 classes of dogs on ImageNet would make the paper even stronger. 2) How much is the performance improvement over methods like CLIP in few-shot settings? Even if there is not much (which is fair, since the aim here is not image-text training), it would be a good idea to show how far behind this would be. 3) To be really honest, I believe the results are pretty incremental. In Table 3 the difference between Diff-Mix & SaSPA is 0.3%, which is quite small and is probably in the noise range. The method is really smart and novel, and I think just using this method on small-scale experiments on fine-grained datasets is kind of underselling the paper. I believe the method has a lot of potential and can be used to do large-scale training. Especially using an edge-based ControlNet and BLIP Diffusion is a really good idea. 
4) I would urge the authors to rethink the experimental results and encourage them to show these results on larger datasets with more convincing results. 5) The Fig. 4 caption is really small and should be a bit more detailed. Technical Quality: 2 Clarity: 3 Questions for Authors: I think the paper needs more experimental results, but the image generation is really good. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We appreciate your recognition of the novelty of SaSPA. We're glad you found our strategy and analysis impressive and valuable. **“Datasets are of small scale on which the experiments have been done. I would expect this method to perform well on fine-grained evaluation of imagenet dogs setup as well. Showing experiments on 200 classes of dogs on ImageNet will make the paper even stronger.”** * Small-scale datasets are common in FGVC due to the infrequent appearance of specific objects in the real world and the subtle differences between sub-categories, which complicate labeling efforts [1]. This limited data availability is precisely what makes FGVC an interesting topic, as it highlights the challenge and importance of generating effective training data. * The Stanford Dogs dataset (also referred to as ImageNet Dogs) contains approximately 20,000 images across 120 classes. This is comparable in scale to the Stanford Cars dataset, which contains about 16,000 images across 196 classes. Thus, the scale of our current datasets is consistent with standard practices in FGVC research. * Nevertheless, to further address the concern and increase confidence in our method, we conducted an experiment on the Stanford Dogs dataset, as requested. The results are presented in Table 2 of the rebuttal PDF. Additionally, we included the Oxford-IIIT Pet Dataset in our evaluation. Both datasets showed improved results with SaSPA. **“How much is the performance improvement over methods like CLIP in few shot settings? Even if there is not much ( which is fair since here the aim is not image text training), it would be a good idea to show how behind this would be.”** The zero-shot accuracy of CLIP is included in Figure 2 of the rebuttal PDF. As shown, SaSPA consistently outperforms CLIP in all shots, including 4-shots, whereas other augmentation methods fall short for Cars and DTD. 
**“To be really honest, I believe the results are pretty incremental. In Table 3 the difference between Diff-Mix & SaSPA is 0.3%, which is quite small and is probably in the noise range.”** * Our main results (Table 1, Table 2, Figure 4) show significant improvements over recent SoTA augmentation methods, both generative and non-generative, across various setups. * As for Table 3, Diff-Mix is a **concurrent paper**. According to the official guidelines, authors are not expected to compare to such works. Nevertheless, we chose to add this comparison for completeness. The methods are quite different, and we believe both are valuable for the research community. Unlike Diff-Mix, which uses fine-tuning for its generative model, our method does not rely on such heavy fine-tuning. We have made it clearer in the paper that Diff-Mix is a concurrent work, as this point may have been missed. **“The method is really smart and novel and I think just using this method on small-scale experiments on fine-grained datasets is kind of underselling the paper. I believe the method has a lot of potential and can be used to do large-scale training. Especially using edge-based control net and blip diffusion is a really good idea”** Thanks for acknowledging the novelty of our method! **“I will urge the authors to rethink the experimental results and encourage them to show these results on larger datasets with more convincing results.”** See our responses to W1 and W3 above. We added new results. **“The Fig4 caption is really small and should be a bit more detailed.”** Thank you for the feedback. We have updated the caption for Figure 4 to be more detailed and easier to read. [1] Dunlap, Lisa, et al. "Diversify your vision datasets with automatic diffusion-based augmentation." Advances in Neural Information Processing Systems 36 (2023): 79024-79034.
Rebuttal 1: Rebuttal: We appreciate the reviewers' time, thoughtful comments, valuable suggestions, and their recognition of the potential positive impact of our method. Below, we address their common questions and concerns, in addition to the individual response per-review. In response to the feedback received, we have made several modifications to the paper and added new results, as requested. These changes have increased the clarity of our work and strengthened our findings even more. ## More Datasets As requested by some reviewers, we added evaluations on two new datasets: the Stanford Dogs dataset and the Oxford-IIIT Pet Dataset. * The Stanford Dogs dataset contains 20,580 images of 120 dog breeds from around the world, built using images and labels from ImageNet for fine-grained visual classification. We used 50% of this dataset due to time constraints. * The Oxford-IIIT Pet Dataset contains 7,349 images of 37 breeds (25 dog breeds and 12 cat breeds) and is also used for fine-grained visual classification. We used 100% of this dataset. * In Table 2 of the rebuttal PDF, we compare SaSPA with CAL-Aug, which is typically the strongest traditional augmentation in our experiments. * We observed that SaSPA results in an improvement for both datasets, further strengthening the findings in our paper regarding SaSPA as a generative augmentation method. * Combined with the results on DTD and CUB, we now have four datasets that are not of rigid objects, demonstrating that our method is effective beyond rigid objects (a concern raised by reviewer Z7K4) ## Will SaSPA Work on Higher Resolutions? We have conducted additional experiments using 448x448 resolution on CompCars, DTD, and CUB datasets, employing SaSPA and the best augmentation method for each dataset (as per Table 1 in our paper). Each experiment was repeated with two seeds. Table 1 in the rebuttal PDF presents the test accuracy for these high-resolution runs. 
Combined with our previous diff-mix comparison which used both 448x448 and 384x384 resolutions (Table 3 in the paper), we now have higher resolution results for all datasets. Key findings: * We observed consistent improvements across all datasets except CUB. * For CUB, we hypothesize that the very fine-grained details such as feather patterns and colors present a significant challenge for generative methods, making it difficult to generate accurate representations at higher resolutions. In conclusion, SaSPA demonstrates promising results for most datasets, affirming its overall benefits. Pdf: /pdf/01acb08d7ada9f46295a5c3a28e79c8e14aed723.pdf
NeurIPS_2024_submissions_huggingface
2024
Sample and Computationally Efficient Robust Learning of Gaussian Single-Index Models
Accept (poster)
Summary: This paper studies efficient parameter estimation from generalized linear models with adversarial noise (a.k.a. agnostic learning), assuming a known link function. The goal is to find a parameter vector that best explains the data. The authors show that $d^{\lceil k^*/2 \rceil}$ samples suffice for this problem, with $k^*$ being the information exponent. The algorithm is a certain projected gradient descent (w.r.t. the loss truncated to degree-$k^*$ Hermite polynomials) initialized with PCA on the tensor unfolding. The sample complexity matches a known CSQ lower bound $d^{k^*/2}$ for even $k^*$ in the realizable setting (i.e., random noise). Strengths: NA Weaknesses: NA Technical Quality: 3 Clarity: 4 Questions for Authors: Major comments: 1. The work [DPVLB24] is compared in B.3, but I'm not sure it's comparable. That paper obtained the conjecturally optimal sample complexity $d^{\bar{k}^*/2}$ with $\bar{k}^*$ being the generative exponent, which can be much smaller than $k^*$. The way I read [DPVLB24] is that if preprocessing is allowed, $d^{\bar{k}^*/2}$ can be achieved. Otherwise, we're stuck at $d^{k^*/2}$. In the current paper, no preprocessing is used. Could the authors comment on the possibility of reducing the sample complexity in the agnostic setting with preprocessing? Or the difficulty of applying a suitable preprocessing that brings down the effective information exponent? Note that the optimal preprocessing in [DPVLB24] relies on the link function. With additional work, they can make it work for misspecified models (a.k.a. unknown link function). But I'm not sure if this is possible with adversarial noise. 1. I would be cautious to claim that $d^{\lceil k^*/2 \rceil}$ **nearly** matches the CSQ lower bound $d^{k^* / 2}$ since there's a nontrivial gap in the exponent for odd $k^*$. This is inconsistent with the standard definition of "nearly" in, say, TCS. 1. 
Regarding the ceiling in the exponent, I suspect that this is a proof artifact since for tensor PCA, the ceiling in [RM14] can be removed with (much) more careful analysis https://arxiv.org/abs/2110.10210 . Minor comments: 1. I didn't check all the cited papers (in the intro and appendix B) but would like to invite the authors to be cautious of the following terminology discrepancy. In classical stats, learning single-index models means learning the unknown link function. OTOH, for a known link function with an unknown parameter vector, the model is referred to as a generalized linear model. There are also works that consider learning the parameter vector from GLMs with an unknown link function (as a nuisance parameter). However, it seems in recent years, people (especially deep learning theorists) have been interchangeably using SIM and GLM to refer to the above problems. Please make sure such discrepancies are taken into account when comparing prior works with the present result. 1. Line 63-66, I'm not familiar with the line of cited work on hardness; why is $d^{O(1/\epsilon)}$ regarded as hard? This seems to me a legitimate polynomial running time. Could the authors say a few more words about under which computational model (low-deg, SQ, SoS, AMP, etc.) the problem is hard, and hard in what sense? 1. Line 72, what does $\sim$ mean? 1. Line 77, the condition $c_{k^*} = \Omega(1)$ doesn't make sense to me. I'm not sure what's the asymptotic in $\Omega(\cdot)$. I suppose $\sigma$ is a fixed function independent of $n,d$ (otherwise it again doesn't seem to make sense), so $c_{k^*}$ is either zero or strictly nonzero (note also that it can be negative). 1. Line 103, could the authors also specify the dependence of running time on $1/\epsilon$? 1. Paragraph after Thm 1.2: there's a huge difference between CSQ and SQ. 
Are there more formal reasons (e.g., SQ lower bound) to believe that agnostic learning GLM is still hard under SQ (beyond the fact that known SQ algorithms for realizable learning fail)? 1. Line 195, "small constant $\epsilon_0$" is mildly misleading -- $1/2$ doesn't seem to be a small constant... 1. End of line 200, there's a redundant 2 in the exponent. 1. Line 223, is $l$ defined to be $\lceil k/2 \rceil$ or $\lfloor k/2 \rfloor$? 1. Line 2 of Algorithm 2, the subroutine Initialization is undefined. I assume it refers to PCA for tensor unfolding. 1. Line 557 is somewhat confusing, do the authors mean that **for any** positive constant $c$, partial trace cannot achieve $\mathbf{w}^0 \cdot \mathbf{w}^* \ge c$? 1. Line 572, I'm not sure why $\mathbf{w}^*$ is desired to be the unique top eigenvector. Do the authors mean that $\mathbf{w}^*$ is an outlier top eigenvector (meaning there's an eigengap between top two eigenvalues)? 1. Line 581-583, I think the exact information theoretic threshold is known for tensor PCA: https://arxiv.org/abs/1812.03403 1. Line 590, I think [HSS15] studies SoS instead of tensor unfolding (correct me otherwise). For tensor unfolding, the correct reference is https://arxiv.org/abs/2110.10210 which confirms the $d^{(k-2)/4}$ conjecture of [RM14]. I'm not sure of the relevance of the bound $d^{k/4}$. 1. Line 591, I don't understand where the $O(d)$ bound for unfolding comes from. The suboptimal analysis of [RM14] already gives $d^{1/2}$. And again, the above reference shows that $d^{1/4}$ suffices for tensor unfolding (which is sharp for unfolding). I'm not sure the relevance of $d^{3/4}$ here. Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
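The Hermite coefficients $c_k$ and the information exponent $k^*$ (the first $k \ge 1$ with $c_k \neq 0$) discussed in this review can be estimated numerically for a given link function. Below is a hedged illustrative sketch, not from the paper, using Gauss-Hermite quadrature and the normalized probabilists' Hermite basis $he_k = He_k/\sqrt{k!}$.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermgauss
from numpy.polynomial.hermite_e import hermeval


def hermite_coeffs(sigma, kmax=6, n_nodes=200):
    """Estimate c_k = E[sigma(Z) he_k(Z)] for Z ~ N(0, 1).

    hermgauss gives nodes/weights for the weight exp(-x^2); the change of
    variables z = sqrt(2) x turns the sum into an expectation under N(0, 1).
    """
    x, w = hermgauss(n_nodes)
    z = np.sqrt(2.0) * x
    w = w / np.sqrt(np.pi)  # now sum(w * f(z)) ~= E[f(Z)]
    coeffs = []
    for k in range(kmax + 1):
        basis = np.zeros(k + 1)
        basis[k] = 1.0  # select He_k in the HermiteE series
        he_k = hermeval(z, basis) / math.sqrt(math.factorial(k))
        coeffs.append(float(np.sum(w * sigma(z) * he_k)))
    return coeffs


def information_exponent(coeffs, tol=1e-8):
    """First index k >= 1 with |c_k| above the tolerance."""
    for k, c in enumerate(coeffs):
        if k >= 1 and abs(c) > tol:
            return k
    return None
```

For example, $\sigma(z) = z^3 - 3z = He_3(z)$ has $c_1 = c_2 = 0$ and $c_3 = \sqrt{6}$, so its information exponent is 3, while ReLU has $c_1 = 1/2$ and hence information exponent 1.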
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort in reviewing our paper, as well as the positive evaluation of our work. We respond to the reviewer's comments below. #### Response to Questions ##### Question 1: (Q1) We agree that care is needed when comparing to [DPVLB24]. As the reviewer remarks, it is unclear whether their preprocessing ideas can be transferred to the agnostic model, for the following reasons. First, the noise model considered in [DPVLB24] **does not cover the agnostic noise** setting. In particular, [DPVLB24] requires the label $Y$ to be independent from $w\cdot x$ for any $w\perp w^*$, which excludes even some standard classes of structured semi-random noise such as the Massart noise. Thus, it is **unclear** whether the SQ lower bound obtained in [DPVLB24] is **tight** for the **agnostic** setting. Second, it is not clear how to apply the label transformation process $T(y) = E[he_{\bar{k^*}}(w^*\cdot x) | Y = y]$ in the **agnostic** setting. Since the distribution of $Y$ is **unknown**, we cannot compute this conditional expectation. Further, how to even approximate this function $T(y) = E[he_{\bar{k^*}}(w^*\cdot x) | Y = y]$ is unclear when the labels take real values. The difficulty of calculating or approximating this conditional expectation in the agnostic setting also poses challenges in transferring this measure of hardness to our setting. In particular, it is not clear how to calculate the generative exponent $\bar{k^*}$ uniformly over **all possible noise distributions**. Finally, we note that other data transformation processes also **fail** in the **agnostic** setting. For example, the SQ algorithm proposed in [CM20], which applies an implicit data transformation process via a thresholding procedure, **provably fails in our setting**, as we have remarked in the introduction. 
That said, we do not claim that further reducing our sample complexity via other more sophisticated techniques is impossible, and we leave it as an interesting future direction. We further refer to our response to reviewer 9olo, weakness 2 and question 1, for additional comparison with [DPVLB24]. (Q2&Q3) Thank you for pointing this out; we will revise appropriately. We also thank the reviewer for the provided reference, which we will look into for possibly closing the gap mentioned by the reviewer. #### Minor comments: (C1) We thank the reviewer for pointing out the possible ambiguity in the terminology. We chose to be consistent with the terminology used in the existing line of work addressing the same activations, from [CM20] to [DPVLB24]. However, we will add a remark to disambiguate. (C2) We are interested in efficient algorithms with sample complexity and runtime that are polynomial in both $d$ and $1/\epsilon$, as $\epsilon$ is usually chosen as a small error tolerance. It was shown in [GGK20,DKZ20,DKPZ21] that for any SQ algorithm, agnostically learning ReLU (one of the most basic activations) to error $\mathrm{OPT}+\epsilon$ requires at least $d^{\Omega(\mathrm{poly}(1/\epsilon))}$ queries, which is not polynomial in $1/\epsilon$. Thus, this line of prior work rules out the possibility of having an SQ algorithm that achieves $\mathrm{OPT} +\epsilon$ error with $\mathrm{poly}(d, 1/\epsilon)$ complexity under our setting. Moreover, the same hardness result ($d^{\Omega(\mathrm{poly}(1/\epsilon))}$ complexity lower bound) holds for low-degree polynomial tests (using the near-equivalence between SQ and low-degree tests shown in [BBHLT21]) and via reduction-based computational hardness (cryptographic hardness) [DKR23]. (C3) In line 72, we use $\sim$ to denote that the functions on both sides are equal in the Hermite orthonormal basis under Gaussian measure. 
Note that $\sigma(z)$ might not be equal to $\sum_{k\geq 0} c_k he_k(z)$ pointwise due to the Gibbs phenomenon of Fourier expansions. We will clarify. (C4) Thank you for catching this typo, it should be $|c_{k^*}| = \Omega(1)$, meaning that it is a universal constant, independent of other problem parameters like $d$ and $1/\epsilon$. (C5) It takes roughly $d^{k^*}$ time to read one $k^*$-Chow tensor, and thus it takes $\tilde{O}(d^{3\lceil k^*/2\rceil} + d^{k^*}/\epsilon)$ time to read all $k^*$-Chow tensors required for estimating the unfolding matrix. Since we only need to estimate the top singular vector to constant error, this can be done with a standard matrix SVD algorithm (like power iteration) in time no higher than reading the aforementioned tensors, up to constants. The optimization subroutine takes time $O(nT) = O(d\log(1/\epsilon))$. Thus, in summary, the total runtime of our algorithm would be $\tilde{O}(d^{3\lceil k^*/2\rceil} + d^{k^*}/\epsilon)$. (C6) If one is interested in error $\mathrm{OPT} +\epsilon$, such SQ lower bounds are known (as explained above). As elaborated in our response to reviewer 9olo, the SQ complexity of agnostically learning SIMs to error $O(\mathrm{OPT}) +\epsilon$ is not well-understood. Specifically, it is unclear if the SQ complexity is the same or significantly higher than that of the realizable setting. Fully characterizing the SQ complexity of the agnostic problem remains an interesting open problem. The main point is that our algorithm is the best known algorithm for this problem to date and almost matches the lower bound for a natural restricted class of algorithms (CSQ algorithms). (C9&10) In line 223, $l$ is defined as $\lfloor k/2 \rfloor$; in line 2 of algorithm 2, the initialization subroutine is the tensor PCA algorithm, i.e., algorithm 1 –– we will clarify this in the description of algorithm 2. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. 
The comments regarding preprocessing appear convincing to me and my overall evaluation remains unchanged. I have one more quick clarification question that's somewhat related to (C2), (C5). Should I think of $\epsilon$ as a constant or depending on $d$ (potentially arbitrarily)? --- Rebuttal 2: Comment: (C11) Our intention was to explain that even when $\mathrm{OPT}$ is very small, i.e., when $\mathrm{OPT}\gtrsim d^{-(k^* - 2)/2}$, we cannot use the partial trace algorithm to find a vector $w^0$ such that the inner product between $w^0$ and $w^*$ is greater than some positive absolute constant. Note that with a random initialization, one easily gets a vector $w^0$ such that $w^0\cdot w^* \approx 1/\sqrt{d}$ with high probability; however, this trivial alignment is not sufficient for our optimization subroutine to work. (C12) We want to point out that there are no significant eigengaps between the eigenvalue of $w^*$ and the eigenvalues of other eigenvectors that are orthogonal to $w^*$; hence, we cannot pick out the target vector $w^*$ from the eigenvectors of the partial-trace matrix efficiently. (C14&15) We apologize for the confusion in the prose of lines 587-594. We emphasize that in Appendix B.4 the definition of the signal strength $\tau$ is different from the definition of the signal strength $\beta$ in [RM14]. In particular, since [RM14] takes a normalization step, it holds that $\tau \approx \sqrt{d}\beta$, as remarked in footnote 1 in [HSS15]. Hence, since [RM14] requires $\beta\gtrsim \sqrt{d}$ for tensor unfolding, it translates to $\tau\gtrsim d$ in our setting. We have added a footnote to remark on the relation between the definition of $\beta$ in [RM14] and the definition of $\tau$ in our setting, and we have modified lines 587-594 to clarify this. Reference: [BBHLT21] Statistical Query Algorithms and Low-Degree Tests Are Almost Equivalent, Matthew Brennan, Guy Bresler, Samuel B. Hopkins, Jerry Li, Tselil Schramm, COLT 2021
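As a concrete illustration of the unfolding-based spectral initialization discussed in this exchange, here is a minimal numpy sketch on a synthetic order-3 spiked tensor $T = \tau\, w^{\otimes 3} + \text{noise}$. This is illustrative only and omits the paper's exact estimator; note the chosen $\tau \ge d$ is consistent with the $\tau \gtrsim d$ requirement for unfolding mentioned in (C14&15).

```python
import numpy as np


def unfold_init(T, d, k):
    """Unfold an order-k tensor into a d^{floor(k/2)} x d^{ceil(k/2)}
    matrix and take its top left singular vector; for k = 3 this
    directly gives an estimate of the planted direction w."""
    r = k // 2
    M = T.reshape(d**r, d ** (k - r))
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    u = U[:, 0]
    if r == 1:
        return u / np.linalg.norm(u)
    # For r > 1, u approximates vec(w^{outer r}); peel off one mode
    # with another SVD of its d x d^{r-1} reshaping.
    W = u.reshape(d, d ** (r - 1))
    Uw, _, _ = np.linalg.svd(W, full_matrices=False)
    return Uw[:, 0]


# Sanity check on a spiked tensor with entrywise Gaussian noise.
rng = np.random.default_rng(0)
d, tau = 16, 20.0
w = rng.standard_normal(d)
w /= np.linalg.norm(w)
T = tau * np.einsum("i,j,k->ijk", w, w, w) + 0.1 * rng.standard_normal((d, d, d))
w0 = unfold_init(T, d, k=3)
```

With this signal-to-noise level, the returned unit vector `w0` should align well with `w` (up to sign).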
Summary: This paper studies the problem of _agnostic learning_ of single-index models, which consists of trying to learn a distribution $\mathcal D$ on $\mathbb R^d \times \mathbb R$ with an estimator of the form $y = \sigma(\langle w, x \rangle)$. Since the problem is not necessarily well-posed, the goal is to reach a performance of at most $C \cdot OPT + \epsilon$, where $OPT$ is the best possible performance. The authors present an algorithm that achieves this bound with a sample complexity of $d^{\lceil k^\star/2 \rceil} + d/\epsilon$, where $k^\star$ is the _information exponent_ of the function $\sigma$. This algorithm proceeds in two steps: the first obtains an informed initialization of the optimal vector $w^\star$ through a tensor unfolding method, which is then passed through an SGD algorithm to obtain the final estimator. Strengths: I found the topic of the paper very interesting; I was mostly familiar with the realizable setting, and the adversarial version provides interesting insights and challenges. I particularly appreciated Appendix B, which compares in-depth the present work with previous papers on both the realizable and agnostic settings. The results encompass a wide class of link functions, and make no assumption on the data distribution at all (apart from Gaussian marginals). The arguments to handle such a diverse class of problems are conceptually interesting, and (as far as I checked -- see `Weaknesses`) correct. Weaknesses: Apart from a few qualms with the presentation (see the minor remarks), my main issue with the paper is simply its length. With the short review time and high review count, it is strictly impossible to certify that a proof which consists of 20-25 pages of dense math, with many computation-heavy steps, is correct. I did check that the proof outline and the main steps seem correct, but it is my opinion that such a paper is incompatible with the NeurIPS review format, and should be submitted to a journal instead. 
My recommendation for acceptance, based on the overall quality of the paper, should be taken with such a caveat. Minor remarks: - the vector $w^\star$ is never rigorously defined; for the sake of exposition, the link between solving Problem 1.1 and obtaining good alignment with $w^\star$ should be emphasized. - In Assumption 1, the wording is confusing: since $\sigma$ is fixed, what is the $c_k^\star = \Omega(1)$ referring to? My best guess is that the results hold uniformly over a class of link functions where $c_k^\star$ is bounded from below, and $C_k^\star$ and $B_4$ from above, but this should be made apparent if this is the case. - there should be a $w^t$ in the RHS of eq. (16). - You mention in the introduction that achieving $OPT+\epsilon$ performance is likely to be hard, but it's difficult to parse why the proof wouldn't extend in this case: this seems hidden in the recursion of Theorem 3.5. Highlighting this barrier would be a nice addition to the proof. Technical Quality: 3 Clarity: 3 Questions for Authors: - In papers on the realizable setting (e.g. [DNGL23, DPVLB24]), the difficulty of the problem is not measured in terms of the link function $\sigma$, but the distribution of $(x, y)$ (assuming that $\sigma$ is ``nice''). Can such a measure be defined in your setting, instead of considering the worst-case scenario on $(x, y)$? - On a related note, in the aforementioned setting, a mismatch between $\sigma$ and the ``true'' link function can lead to a value of $OPT$ of order $\Omega(1)$, but some algorithms manage to achieve almost perfect alignment with the vector $w^\star$. In your case, a large value of $OPT$ yields vacuous bounds on both the performance of your algorithm and its alignment with $w^\star$; is there an explanation of this phenomenon? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort in reviewing our paper and the positive assessment. Below, we provide specific responses to the points and questions raised by the reviewer. >(Question 1): Can such a measure be defined in your setting, instead of considering the worst-case scenario on (x,y)? The prior work [DPVLB24] considered the generative exponent $\bar{k}^*$ of the label $y$, which is defined as the first non-zero term $E_y[(E_x[he_k(w^*\cdot x)|y])^2]$. It is not clear how to leverage such a measure for the agnostic setting for a number of reasons, including the fact that the "noise" in our setting is **unknown** to the algorithm and that the labels may take real values (in which case it is unclear how to estimate the conditional expectation stated above). > (Question 2): a mismatch between 𝜎 and the ``true'' link function can lead to a value of 𝑂𝑃𝑇 of order Ω(1), but some algorithms manage to achieve almost perfect alignment with the vector $w^*$. In your case, a large value of 𝑂𝑃𝑇 yields vacuous bounds on both the performance of your algorithm and its alignment with $w^*$; is there an explanation of this phenomenon ? This is indeed the case and it is a consequence of the **agnostic** model. Specifically, in the presence of agnostic noise, it is not possible to achieve perfect recovery for the target vector $w^*$ (i.e., find a $w$ such that $w\cdot w^*\geq 1 - \epsilon$ for any desired accuracy $\epsilon>0$) -- even information-theoretically. Specifically, one can construct examples where two different weight vectors both achieve the optimal error while being far from each other. This is a fairly standard fact in agnostic learning and holds even for simple activations like ReLU. In the **realizable** setting, the reason that one can recover the target vector $w^*$ almost perfectly is that the distribution of the label $y$ is **known**, and hence it is possible to manipulate the labels to gain more information about $w^*$. 
>(Minor Remark 1) the vector $w^*$ is never rigorously defined We thank the reviewer for pointing this out. We clarify that $w^*\in\mathrm{argmin}_{w\in\mathbb{S}^{d-1}} \mathcal{L}_2^{\sigma}(w)$; i.e., $w^*$ is defined as a vector that achieves the minimum $L_2^2$ loss. We have added this definition to the main body. > (Minor Remark 2) My best guess is that the results hold uniformly over a class of link functions where $c_{k^*}$ is bounded from below, and $C_{k^*}$ and $B_4$ from above The reviewer’s understanding of Assumption 1 is correct. We further clarify that the activation $\sigma$ is fixed, but it needs to satisfy Assumption 1 so that our algorithm achieves $C\cdot \mathrm{OPT}+\epsilon$ error, where $C$ is an absolute constant, using the sample complexity and runtime claimed in Theorem 1.2. The constant $C$ in the error depends on the parameter $C_{k^*}$; therefore, if $C_{k^*}$ is not an absolute constant, then our algorithm does not achieve a constant-factor approximate error. The parameters $c_{k^*}$ and $B_4$ appear in the sample complexity. This implies that if $c_{k^*}$ and $B_4$ are not independent of $d$, the sample complexity and runtime might not be of order $O(d^{\lceil k^*/2 \rceil} + d/\epsilon)$. >(Minor Remark 3): You mention in the introduction that achieving $\mathrm{OPT}+\epsilon$ performance is likely to be hard, but it's difficult to parse why the proof wouldn't extend in this case: this seems hidden in the recursion of Theorem 3.5. Highlighting this barrier would be a nice addition to the proof. Thank you for the suggestion; we'll incorporate it. First, we note (as stated in the paper) that there exist both SQ lower bounds and reduction-based hardness results implying that achieving error $\mathrm{OPT}+\epsilon$ requires $d^{\mathrm{poly}(1/\epsilon)}$ time, even for a ReLU activation. To see why a constant-factor approximation is inherent in our algorithmic approach, we refer to the Technical Overview section (lines 176-185).
The main technical reason is that the sharpness structural result does not hold on the whole sphere -- it is only valid on the sphere excluding a spherical cap centered at the target vector $w^*$. Specifically, by Lemma 3.3, we have sharpness on the subset $\mathbb{S}^{d-1} \setminus \mathcal{S}$, where $\mathcal{S} = \lbrace w: \| w \| = 1, \sin(\angle(w, w^*)) \leq 4e\sqrt{\mathrm{OPT}} \rbrace$. This restriction of sharpness is due to the strong agnostic noise. Since every point in the spherical cap $\mathcal{S}$ is an $O(C_{k^*}\mathrm{OPT})+\epsilon$ accurate solution (Claim E.7), we can terminate the algorithm after entering this spherical cap if we are only looking for constant-factor approximate solutions. However, since sharpness no longer holds once the algorithm’s iterates enter this spherical cap $\mathcal{S}$, we lose the information about the direction in which we should update $w$; hence, we cannot continue decreasing the error to $\mathrm{OPT} + \epsilon$.
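The spherical-gradient picture in this rebuttal (a Riemannian gradient whose inner product with $w^*$ scales with $-\sin^2\theta$, pulling iterates toward the target) can be illustrated numerically. The following is our own minimal toy sketch, not the paper's algorithm: noiseless labels, the link $\sigma(t)=t^2$ (information exponent $k^*=2$), a hard-coded warm start, and plain projected SGD on the unit sphere; all parameter choices below are illustrative.

```python
import numpy as np

# Toy: learn w_star for y = (w_star . x)^2 by projected ("Riemannian")
# SGD on the unit sphere, starting from a warm start with overlap 0.5
# (standing in for the initialization step discussed in the rebuttal).
rng = np.random.default_rng(0)
d = 5
w_star = np.zeros(d); w_star[0] = 1.0

# Warm start: overlap w . w_star = 0.5, unit norm.
w = 0.5 * w_star + np.sqrt(1 - 0.5**2) * np.eye(d)[1]

lr, batch, steps = 0.02, 1024, 200
for _ in range(steps):
    x = rng.standard_normal((batch, d))
    y = (x @ w_star) ** 2
    m = x @ w
    # Euclidean gradient of (1/2) E[(sigma(w.x) - y)^2] with sigma(t) = t^2.
    g = ((m**2 - y) * 2 * m) @ x / batch
    g_riem = g - (g @ w) * w          # project onto the tangent space at w
    w = w - lr * g_riem
    w = w / np.linalg.norm(w)         # retract back onto the sphere

alignment = abs(w @ w_star)
print(round(alignment, 3))
```

In this clean setting the tangent-space gradient points toward $w^*$ at every iterate, so the alignment climbs to near 1; the rebuttal's point is that under agnostic noise this signal survives only outside a spherical cap of radius $O(\sqrt{\mathrm{OPT}})$ around $w^*$.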
Summary: The paper provides a polynomial-time algorithm reaching the optimal CSQ sample complexity (up to sub-leading factors) for a general class of single-index models under the setup of adversarial noise. Similar to existing works, the algorithm utilizes the empirical estimate of the $k$-th Hermite tensor of the target to obtain an initialization with a non-trivial overlap (weak recovery) by estimating the top singular vector of an unfolded tensor. Subsequently, the algorithm utilizes mini-batch SGD on the sphere starting from non-trivial overlap to reach the optimal error up to a constant factor. Strengths: The paper makes novel technical contributions towards understanding the sample complexity of learning under adversarial noise. Understanding the differences between the adversarial and realizable/random noise settings is crucial towards justifying the applicability of CSQ and SQ lower bounds in typical machine learning setups. The paper adequately describes the technical challenges in matrix concentration under adversarial noise, as well as why the partial trace algorithm in Damian et al. (2024) cannot be directly applied under the adversarial noise setting (Appendix B.3). Weaknesses: * The central ideas of obtaining weak recovery by estimating the Hermite tensor and subsequently running minibatch SGD are largely similar to existing works (Biroli et al. (2020), Damian et al. (2023), Damian et al. (2024), Chen and Meka (2020)). Therefore, the major contribution of the work is adapting the ideas and analysis in these papers to adversarial noise, which has some technical but limited conceptual novelty. Even under adversarial noise, prior works have established similar guarantees for smaller classes of link functions. It is unclear whether the setup of adversarial noise is relevant for gradient-based training in machine learning models.
Discussion of the motivation behind studying the adversarial noise and the novel conceptual contributions/implications of the work will improve the paper. * In the realizable and random-noise case, recent works Dandi et al. (2024) and Damian et al. (2024) show the learnability of multi/single index models under reduced SQ complexity through reuse of data (implicitly transforming the labels) and explicit transformation of labels, respectively. In light of these works, it has become apparent to the community that the SQ class is more suitable towards describing the limitations of gradient-based learning rather than CSQ. Therefore, it is important for the paper to justify the choice of the CSQ class and discuss the effect of data reuse and label transformation. The paper presently only discusses the limitation of some SQ algorithms such as the one in Damian et al. (2024) under adversarial noise. This alone doesn’t justify the relevance of CSQ lower-bounds under the possibility of label transformation in the adversarial noise case. Missing references of closely related works: - Biroli, Giulio, Chiara Cammarota, and Federico Ricci-Tersenghi. "How to iron out rough landscapes and get optimal performances: averaged gradient descent and its application to tensor PCA." Journal of Physics A: Mathematical and Theoretical 53.17 (2020): 174003. - Dandi, Y., Krzakala, F., Loureiro, B., Pesce, L., & Stephan, L. (2023). How two-layer neural networks learn, one (giant) step at a time. arXiv preprint arXiv:2305.18270. - Dandi, Y., Troiani, E., Arnaboldi, L., Pesce, L., Zdeborova, L., & Krzakala, F. The Benefits of Reusing Batches for Gradient Descent in Two-Layer Networks: Breaking the Curse of Information and Leap Exponents. In Forty-first International Conference on Machine Learning. Technical Quality: 3 Clarity: 2 Questions for Authors: * Is the proposed algorithm expected to reach the SQ lower-bounds upon transformation of the labels?
* Intuitively, why does unfolding fare better than the partial trace estimator under adversarial noise? * Can the memory and runtime requirements for Algorithm 1 be optimized similarly to the partial trace algorithm (see Remark 4.2 in Damian et al. 2024)? * The discussion in lines 158-161 regarding the Gaussianity of the noise term in prior works isn’t accurate, since Damian et al. 2024 also consider arbitrary labels leading to general noise. * Why use tensor PCA for a start, while we know there are more efficient starting points for such problems, e.g. (https://arxiv.org/abs/1708.05932, arXiv:1811.04420, https://arxiv.org/abs/2012.04524)? * Correction: The references list "G. B. Arous" instead of "G. Ben Arous" (Ben Arous is the surname, not a second name). This should be corrected to accurately reflect the author's name. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The work precisely describes the theoretical assumptions. One major limitation of the work is the absence of a discussion of label transformations/batch reuse (see weaknesses above). The work is primarily of a theoretical nature and has no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **General Response to Reviewer 9oLo** We thank the reviewer for the provided feedback. Before responding to specific questions and comments, we address two main points, which we believe are the main sources of the reviewer's somewhat negative view of our work. We hope that upon clarifying the context, the reviewer would consider reevaluating our work. First, our work is motivated by the line of work on **agnostic learning** of single-index models under structured marginal distributions and broad classes of activations, as discussed in the top-level response. Agnostic learning, introduced in [Hau92, KSS94], is a well-established model meant to capture **realistic** learning settings, where we do not assume that the labels perfectly follow a model from the class (where "perfect" also accounts for, e.g., zero-mean noise), and the goal is to be competitive with the best-fit model from the class. The line of work on agnostic learning has a long history in the learning community, with papers regularly published at top ML theory venues such as COLT, NeurIPS, and ICML over the past three decades. The aforementioned line of work on agnostically learning SIMs (see the top-level response) elaborates on the difficulties of polynomial sample and time learning, and develops the first **efficient** and **constant-factor** **agnostic** algorithms for **monotone** activations. While these algorithms rely on first-order methods, they do not rely on “vanilla SGD on the square loss” and their analyses are fairly sophisticated. Second, it is important to note here that our focus was **not on the CSQ model** itself or on the optimality of our sample complexity for **all SQ algorithms**. It is a plausible conjecture that the sample complexity of our algorithm can be improved (by an appropriate label transformation preprocessing or some other method) and this remains an interesting open question for future work. 
As we explain below, the applicability of existing such approaches is **unclear**. Importantly, our main result is **the first** constant-factor agnostic learner with polynomial sample and time complexity addressing the broad class of activations defined via Hermite polynomials. It is also **the most sample and computationally efficient algorithm** for this task known to date — not only within the class of first-order/SGD-type algorithms, but **in general**. --- Below we address specific comments and questions raised by the reviewer. #### Weakness 1 We respectfully disagree with the reviewer’s points. As explained in the submission and reiterated below, there is a vast difference between the **realizable/random noise** setting and the challenging **agnostic** setting that we study (both in terms of algorithms and analysis). Conceptually, it is important to recall that our goal is to obtain error $O(\mathrm{OPT})+\epsilon$, which is the best possible error achievable in polynomial time (please see our general response). None of the aforementioned works achieves such a guarantee, even for very special cases, e.g., for a ReLU activation. (That is, the algorithms themselves in these prior works do not suffice; not merely the analyses.) At the technical level, our approach also differs significantly from these prior works. Specifically, our algorithm has two main components: an initialization subroutine and an optimization algorithm. Both are new and require novel analysis. To obtain non-trivial weak recovery in the initialization step, we carried out a **fine-grained** analysis of a $k$-tensor-PCA algorithm in the presence of agnostic noise. The optimization algorithm is different from these prior works as well, and its analysis hinges upon a critical **structural result** for the $L_2^2$ loss, which we term ‘(alignment) sharpness’ (Lemma 3.3).
In more detail, we show that the Riemannian gradient $\mathbf{g}$ of the $L_2^2$ loss of the *truncated activation* contains a **strong signal** in the direction of $w^*$: $\mathbf{g} \cdot w^* \leq -\mu\sin^2(\theta)$, where $\mu$ is an absolute constant. Intuitively, this structural result conveys that the gradient vector $\mathbf{g}$ can *pull* the algorithm iterates towards $O(\mathrm{OPT})+\epsilon$ solutions. **None** of the prior works listed by the reviewer established such a structural result, which is **key** to obtaining a constant-factor approximation. We refer to Section 1.2 (Technical Overview, lines 176-192) for a more detailed discussion. >It is unclear whether the setup of adversarial noise is relevant for gradient based training in machine learning models [...] As noted in our general response, the agnostic model is **fundamental** in learning theory and has been **extensively studied** in the context of learning SIMs with monotone activations. From a practical perspective, the realizable or random label noise settings are often unrealistic, as they posit the existence of a model that perfectly fits the data (possibly on average). #### Weakness 2 As noted in our general response, the focus of our work is **not on the CSQ vs SQ** distinction. We provide **the first polynomial-time algorithm** for our problem that achieves near-optimal error of $O(\mathrm{OPT})+\epsilon$; a goal **not achieved by these prior works** (with any polynomial sample complexity). Importantly, when one talks about "SQ complexity" in the agnostic model, the accuracy achieved by the algorithm is **critical**. Please see our response to the reviewer's related question below. --- Rebuttal 2: Comment: #### Response to Questions (Q1) It is important to note that the SQ complexity of the learning problem in the **agnostic** setting is **not necessarily the same** as the SQ complexity in the **realizable or random noise** cases.
Specifically, the SQ complexity in the agnostic setting **depends on the desired accuracy**. For example, for accuracy $\mathrm{OPT}+\epsilon$, the SQ complexity of agnostically learning a ReLU (corresponding to $k^{\ast}=1$) under Gaussian marginals is known to be $d^{\mathrm{poly}(1/\epsilon)}$ -- i.e., **exponential** in $1/\epsilon$ [GGK20,DKZ20,DKPZ21]. In contrast, in the realizable/random noise setting, the SQ complexity is $\mathrm{poly}(d/\epsilon)$, as long as $k^{\ast}$ is bounded by an absolute constant. While an SQ *lower bound* for the realizable setting also applies to the agnostic setting, it is **unclear if a matching SQ upper bound exists**. In particular, it remains **an open problem** what the SQ complexity of our agnostic problem is for error $O(\mathrm{OPT})+\epsilon$. At a more technical level, it is not clear whether the approaches of [DPVLB24, DTA+24] can be leveraged in the agnostic setting to achieve a constant factor approximation. Regarding [DPVLB24], as explained in our submission, Section B.3 line 536-547, the generative exponent defined there is **specific to the joint distribution** $P(x,y)$ (in their notation), where the labels $y$ are corrupted by **structured and known** noise and this is **significantly weaker** than the agnostic model. Notably, they require the noise $\xi$ to be independent of $w\cdot x$ for any $w \perp w^*,$ which **excludes** even, for instance, Massart noise. Regarding the result in [DTA+24], it is unclear that it suffices **even for weak recovery** under the **agnostic** setting. The reason is that, as shown in [LOSW24], the sample-reuse method implements **monomial transformation** on the labels $y$, but in the **agnostic** setting the labels are **not guaranteed to have bounded higher moments**. (Q2) At a high level, applying the partial trace operator to a tensor can be viewed as **smoothing the tensor PCA objective** (see [ADGM17]). 
While in the **realizable** setting, smoothing the objective **helps** the optimization algorithm escape the local minima near the equator, in the **agnostic** setting, smoothing the landscape could also **bury the signal** that is already very weak. In particular, the partial trace operator sums up many entries of the noise tensor. Due to the agnostic nature of the noise, this unfortunately further **corrupts** the labels and makes it harder to discover the signal of the target vector $w^*$ from the tensor. (Q3) We think this might be possible using the special structure of the Hermite tensors. However, this is beyond the scope of our work, and we leave it as a future direction. (Q4) First, the label noise handled by Damian et al. 2024 is **not arbitrary**; please see the second paragraph in our response to the first question. Additionally, we would like to clarify that here we are referring to the traditional tensor PCA methods (provided in [RM14]), which rely on the Gaussianity of the noise tensor. We emphasize that the analysis and guarantees of traditional tensor PCA methods, like the tensor unfolding or partial trace, **cannot be directly applied to our setting**, and a fine-grained analysis is required. To avoid confusion, we have modified the phrasing in line 158-161. (Q5) We respectfully disagree with the reviewer’s comment. First, we are not sure what ‘such problems’ in the reviewer’s question refers to. The articles the reviewer listed provide matrix spectrum methods for phase retrieval problems, which address **one specific link function** with **information exponent** $k^* = 2$. In our work, we require algorithms that can **weakly recover** the signal for **much more general link functions** with **any constant information exponent** $k^*$ from a high dimensional tensor. Furthermore, the methods the reviewer points to are **not** qualitatively more efficient compared to our initialization method. 
Specifically, they require $n\propto d$ samples **asymptotically**. This is the same as the sample complexity of our initialization algorithm, which requires $n = O(d)$ samples **non-asymptotically** for phase retrieval problems, as $k^* = 2$ in this case. In fact, when $k^* = 2$, our initialization algorithm is simply a Chow-matrix PCA algorithm that can be carried out efficiently using power iteration. References: [DTA+24] Y. Dandi, E. Troiani, L. Arnaboldi, L. Pesce, L. Zdeborova, and F. Krzakala. The benefits of reusing batches for gradient descent in two-layer networks: Breaking the curse of information and leap exponents. In Forty-first International Conference on Machine Learning, 2024. [LOSW24] J. D. Lee, K. Oko, T. Suzuki, D. Wu. Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit. https://arxiv.org/abs/2406.01581. --- Rebuttal Comment 2.1: Title: Your answer to Q5 is not correct Comment: With respect to Q5, your statements are incorrect. The papers I mentioned solve ALL problems with generative (and not information, as you claim) exponent up to two. This means that virtually any link function (except made-up, carefully fine-tuned ones) is covered. See, for instance, the discussion on page 5 in the first paragraph of https://arxiv.org/pdf/2403.05529: "In fact, [MM18, BKM+19, MLKZ20] give a necessary and sufficient condition on P that enables such T to lower the information exponent to 2." I invite the authors to check Theorem 1 and Theorem 2 in [MM18] https://arxiv.org/abs/1708.05932, which are for "generic sensing models" (this is also the case of [BKM+19, MLKZ20]; phase retrieval is just a particular application). The authors can also check https://arxiv.org/abs/2012.04524, Section 1.2 Main results, eq. (5) for the optimal transformation valid for almost any link function.
The ONLY functions not covered in these works are those with generative (again, not information) exponent 3 and higher (see again https://arxiv.org/pdf/2403.05529), a class that essentially consists of fine-tuned functions, corresponding to unnatural, made-up problems, without much application for generic single-index models. To quote your answer, all these papers do study "algorithms that can weakly recover the signal for much more general link functions with any constant information exponent $k^*$" (in fact they do so for arbitrary $k^*$ with just $O(d)$ samples and $O(d\log d)$ iterations), and I thus invite the authors to check these papers carefully. --- Reply to Comment 2.1.1: Comment: We reiterate that our work provides guarantees in the *agnostic model*, where the labels $y$ can be arbitrarily corrupted. Consequently, a function of the form $f(x)= E[y|x]$ is *unknown* to the learner and cannot even be estimated efficiently (as the $x$-marginal follows a standard normal, which makes it impossible to sample the same point twice). The works cited by the reviewer give necessary conditions to lower the information exponent down to $2$, assuming that the distribution of $y$ is *known* to the learner. We invite the reviewer to check the statement of Theorem 2 in [MM18], and in particular how the function $T(y)$ is defined. In summary, the algorithms given in these works do not succeed in our corruption model. Moreover, even assuming the distribution of $y$ is known to the learner (in which case the above theorem would be applicable), there exist function/distribution pairs where the information exponent = generative exponent > 2 (see for example [MM18] and Figure 2 in [DPVLB24]). For such instances, the aforementioned works do not achieve the stated results. In summary, even for the easier setting where the learner knows the distribution of $y$ a priori, the above works do not in general achieve improved sample complexity for our setting.
The reviewer argues that “virtually any link function (except made-up, carefully fine-tuned ones)” has a generative exponent up to $2$. We respond in two parts. First, our work is theoretical, establishing algorithmic results under a clearly defined set of assumptions. If we strengthen these assumptions (e.g., assume a generative exponent at most $2$), potentially more efficient results are possible (but even that remains open in the agnostic model). We emphasize, however, that this is formally a special case of our setting. Second, from the practicality perspective, one can construct natural examples where the information exponent = generative exponent > 2. In particular, this holds if the distribution of $y$ is supported in $\{\pm 1\}$, in which case the two exponents are identical (because in this case SQ and CSQ are known to be equivalent). Note that the distribution of $y$ can be Boolean-valued even if the link function is real-valued (e.g., a sigmoid).
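As a concrete companion to the initialization discussion earlier in this thread (the authors' remark that for $k^*=2$ their initialization reduces to PCA on a Chow matrix, computable by power iteration), here is a minimal sketch. It is our own illustration under simplifying assumptions (noiseless labels $y=(w^*\cdot x)^2$, Gaussian inputs), not the paper's algorithm; variable names are ours.

```python
import numpy as np

# For y = (w_star . x)^2 and x ~ N(0, I), Stein's identity gives
# E[y (x x^T - I)] = 2 w_star w_star^T, so the top eigenvector of the
# empirical "Chow matrix" M aligns with w_star. We extract it by
# power iteration, as mentioned in the rebuttal for the k* = 2 case.
rng = np.random.default_rng(1)
d, n = 8, 20000
w_star = np.zeros(d); w_star[0] = 1.0

x = rng.standard_normal((n, d))
y = (x @ w_star) ** 2

# M = (1/n) sum_i y_i (x_i x_i^T - I)
M = (x * y[:, None]).T @ x / n - y.mean() * np.eye(d)

# Power iteration for the top eigenvector of the symmetric matrix M.
v = rng.standard_normal(d)
for _ in range(100):
    v = M @ v
    v = v / np.linalg.norm(v)

overlap = abs(v @ w_star)
print(round(overlap, 3))
```

With agnostic label corruptions, the clean rank-one signal above is buried in an adversarial perturbation of the matrix, which is where the fine-grained concentration analysis debated in this thread comes in.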
null
null
Rebuttal 1: Rebuttal: **Top-Level Response** We thank the reviewers for their time and effort invested in evaluating our paper. We are encouraged by reviewers finding our results **technically novel** (**9olo**), **conceptually interesting** (**wBuC**), and rating the **presentation** of our work as **excellent** (**z8KF**). Below, we restate our motivation and main contributions, along the way hoping to clarify the context of our work. Specific comments from the reviews are addressed in individual responses. We look forward to the opportunity to respond to further questions and engage in a discussion with the reviewers. #### Motivation & Context The main motivation of our work is to develop polynomial-sample and time algorithms for **robustly** learning SIMs when the activation function is **not necessarily monotone**. "Robust" here refers to the well-established **agnostic learning** setting, where the labels do not necessarily correspond to any model from the class (i.e., there is no "perfect" model), and the goal is to be competitive with the *best-fit* model. Prior work [DGK+20, DKTZ22, ATV23, WZDD23, GGKS23, ZWDD24] has developed such efficient algorithms, albeit **restricted to a subclass of monotone functions**. Obtaining similar results for more general activations, namely for monotone and Lipschitz functions (a **subset** of the activations we handle) was explicitly stated as an open problem in [ZWDD24]. A **major goal** in the agnostic setting is to obtain as high accuracy as possible in polynomial time. The information-theoretically optimal error is $\mathrm{OPT}+\epsilon$ where $\mathrm{OPT}$ is defined as the minimum mean square loss that is attainable by any function in the target class. However, obtaining such an error guarantee requires SQ complexity $d^{\mathrm{poly}(1/\epsilon)}$. 
The **best error we can hope for** with SQ complexity $\mathrm{poly}(d/\epsilon)$ is $C\cdot\mathrm{OPT}+\epsilon$, where $C>1$ is a **universal constant** independent of the problem dimension. Obtaining such an error guarantee in polynomial time is **highly non-trivial**. Standard SGD-based algorithms and/or their analyses **inherently fail** to achieve this error even for the basic case of a ReLU activation, as discussed, for example, in the introduction of [WZDD23]. Specifically, even with a tight analysis, the best possible parameter $C$ for such methods would scale polynomially either with the **dimension** or the **diameter** of the space; and the dependence on $\mathrm{OPT}$ would **not necessarily** be **linear** (e.g., scaling with $\mathrm{OPT}^{1/2}$). The class of activations we consider is the same as considered in the prior work of [DNGL23], also studied in [DPVLB24] (and other works), generalizing the monotone activations in the line of work mentioned above. We emphasize that the algorithms appearing in [DNGL23, DPVLB24] do **not** achieve the desired $O(\mathrm{OPT})+\epsilon$ error guarantee in the agnostic model, even restricted to the case of a ReLU activation (which corresponds to the information exponent of $k^*=1$). #### Main Contribution Our main contribution is **the first polynomial-time algorithm** that achieves the $O(\mathrm{OPT}) + \epsilon$ error guarantee for any activation in the aforementioned broad class, with sample complexity scaling with $d^{\lceil k^*/2\rceil}$, where $k^*$ is the information exponent characterizing activations in the class. For small constant values of $k^*$, this yields **the first polynomial sample and time agnostic learner** that goes well beyond the monotone case. Prior work does not achieve this error guarantee **even when restricted to** $k^*=1$.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Adaptive Randomized Smoothing: Certified Adversarial Robustness for Multi-Step Defences
Accept (spotlight)
Summary: The paper introduces Adaptive Randomized Smoothing (ARS), an innovative theoretical extension of Randomized Smoothing (RS) based on f-Differential Privacy (f-DP) to enhance the robustness certification of machine learning models. Empirical results demonstrate that ARS surpasses both standard RS and its variants in robustness certification metrics. The authors present a solid theoretical framework and conduct thorough experimental evaluations. Despite ARS inheriting certain inherent constraints of RS, the method represents a significant advancement, offering a fresh approach to augment RS through budget decomposition. Strengths: 1. The paper is well-written and easy to follow. 2. The theoretical underpinnings are solid: The integration of RS with f-DP and the subsequent decomposition leveraging f-DP properties is a creative and compelling strategy to refine RS. 3. The implementation of a masking mechanism is judiciously chosen to tackle the dimensionality challenges highlighted by the theoretical discourse. Weaknesses: 1. As an enhancement of RS, ARS is subject to the same fundamental limitations, notably the requirement for a substantial sample size to attain satisfactory robustness certification. The authors acknowledge this constraint within the limitations section of the paper. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. Does the theoretical framework extend to alternative norm types, such as L1 or L2 norms? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: See weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful review. We are happy to answer the question about alternative norms. > Does the theoretical framework extend to alternative norm types, such as L1 or L2 norms? While the ARS theory (f-DP based RS and composition) applies to other norms such as L1 or L2, our specific instantiation using masking applies only to L∞ robustness: For L2: masking under the Gaussian mechanism implies no gains for L2 robustness directly. In particular, the reduction in the noise variance in the second step (Eq 6) is valid only against L∞ adversaries, and we would not be able to reduce noise this way for an L2 adversary (this is discussed in Remark 3, ll. 177-181). Intuitively, this is because in the worst case, an L2 attack could be fully applied to pixels with a mask weight of 1, and so masking does not trigger noise reductions and our approach reduces to standard RS with two steps averaging the noise. This is a core reason why we do not present L2 results. For L1: bounding L1 with L2 (as we do between L∞ and L2 in the paper) does not involve the input dimension d. As a result, masking also doesn’t enable noise reduction by dimensionality reduction. That being said, it is plausible that by using multi-step composition, one could design architectures that improve robustness under other norms such as L1, L2, or L0. A potential avenue of research for L2 (that may apply to other norms as well) is to leverage the f-DP analysis of the subsampled Gaussian mechanism: https://arxiv.org/abs/1911.11607. This however would require significant design and analysis work, which is not yet explored, and in this submission we leave it for future work. --- Rebuttal Comment 1.1: Comment: I greatly appreciate your discussion regarding the potential for extension to other norms. I think it would be most beneficial to include this revised discussion in the next version of your manuscript. I will keep my score for recommendation.
Summary: This work presents the first sound composition of randomized smoothing. Based on the novel theoretical results, this work presents the first sound way to compose a mask generator with the Gaussian sampling required for randomized smoothing, thereby reducing the effective dimension and improving the certification of randomized smoothing. This work opens up the possibility of more complex compositions for randomized smoothing, which has been attempted before without soundness or great success. To this end, this work serves as a good milestone in the field of randomized smoothing. Strengths: This paper presents a great theoretical contribution to randomized smoothing, namely the composition rule. I did not find evident mistakes in the proof, but others might spot issues. If the theorems presented are correct, I recommend immediate acceptance, regardless of the experimental inadequacy that I will elaborate on in the weaknesses section. The experimental section presents three different benchmarks, characterizing different aspects. The results show the advantage of the proposed method. The $L_\infty$ robustness of randomized smoothing is improved and depends less on the data dimension. Weaknesses: The main inadequacy I spot is its experimental design, specifically Section 4.3. While Sections 4.1 and 4.2 are interesting and validate some of the claims made, Section 4.3 presents the main benchmark in the field of randomized smoothing. However, the authors only present the results on ImageNet with $\sigma=0.5$, while the main practice in the field is to report results on \{cifar-10, ImageNet\} times \{$\sigma=0.25, 0.5, 1.0$\}. I request the authors to present the full benchmark for comparison. In addition, to compare with Cohen et al., the masking trick could also be applied to certify $L_2$ robustness. It is not clear why the authors only report $L_\infty$ robustness in the paper. Technical Quality: 2 Clarity: 3 Questions for Authors: See weaknesses.
Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 4 Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful review. We address the two main requests listed under weaknesses: > the main practice in the field is to report results on {cifar-10, ImageNet} times {𝜎=0.25,0.5,1.0} > “I request the authors to present the full benchmark for comparison.” Thank you for the suggestion. In the rebuttal PDF we report results for CIFAR-10 (Figures 10 & 11) and ImageNet (Figure 12) at these noise levels. We will update all the CIFAR-10 and ImageNet results in the paper to these standard noise levels with evaluation across three seeds. Due to time and computational constraints during the rebuttal phase, we have run a single seed for ImageNet experiments. The CIFAR-10 results include standard CIFAR-10 (without backgrounds, k=32) and larger background images (k=64). As expected, our approach only yields modest gains on CIFAR-10 without backgrounds, as the mask cannot provide much dimension reduction (although we still see that the masks select relevant parts of the image). With large backgrounds, the improvements are larger, as shown in Figure 11 (b). These rebuttal results are an improvement after finding and fixing a pre-processing inconsistency that led to suboptimal masks (this issue only affected ARS, on CIFAR-10, and no other results). > “In addition, to compare with Cohen et al., the masking trick could also be applied to certify 𝐿2 robustness. It is not clear why the authors only report 𝐿∞ robustness in the paper.” While the ARS theory (f-DP based RS and composition) applies to L2, our specific instantiation using masking applies only to 𝐿∞ robustness, in the sense that it implies no gains for L2 robustness. In particular, the reduction of the noise variance in the second step (Eq 6) is valid only against 𝐿∞ adversaries, and we would not be able to reduce noise this way for an L2 adversary (this is discussed in Remark 3, ll. 177-181). 
Intuitively, this is because in the worst case, an L2 attack could be fully applied to pixels with a mask weight of 1, and so masking does not trigger noise reductions and our approach reduces to standard RS with two steps averaging the noise. This is a core reason why we do not present L2 results. That being said, it is plausible that using multi-step composition, one could design architectures that improve L2 robustness (a potential avenue of research for that is to leverage the f-DP analysis of the subsampled Gaussian mechanism: https://arxiv.org/abs/1911.11607). This however would require significant design and analysis work, which is not yet explored, and in this submission we leave it for future work. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed rebuttal from authors. This generally clears my previous concerns. I maintain my previous score.
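As context for the comparison with Cohen et al. discussed in this thread, the standard single-step Gaussian smoothing certificate (the baseline being compared against) can be sketched in a few lines. This is a minimal sketch of the well-known binary-case $L_2$ formula only, not the paper's two-step ARS certification, and the function name is our own.

```python
from statistics import NormalDist

def certified_l2_radius(p_a_lower: float, sigma: float) -> float:
    """Standard Gaussian-smoothing certificate (Cohen et al., 2019).

    If a lower confidence bound p_a_lower > 0.5 on the top-class
    probability holds under noise N(0, sigma^2 I), the smoothed
    prediction is robust to any L2 perturbation smaller than
    sigma * Phi^{-1}(p_a_lower)."""
    if p_a_lower <= 0.5:
        return 0.0  # abstain: no certificate available
    return sigma * NormalDist().inv_cdf(p_a_lower)
```

For instance, with `sigma = 0.5` and a lower bound of 0.8 on the top-class probability, the certified radius is about 0.42; the radius grows with both the noise level and the confidence bound.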
Summary: This paper proposes a framework to derive robustness certificates for smoothed classifiers based on f-differential privacy (f-DP). The framework achieves the same tight certificate for Gaussian smoothed classifiers as Cohen et al. (2019), while enabling analysis of adaptive multi-step smoothing mechanisms using the adaptive composition theorem for f-DP. To demonstrate the framework, the authors design a two-step smoothing mechanism that effectively performs adaptive (input-dependent) dimension reduction to achieve robustness against $\ell_\infty$-bounded perturbations. Experiments on three datasets demonstrate substantial improvement over baselines in some cases. Strengths: **Originality and significance:** This paper reconnects randomized smoothing and differential privacy, showing that f-DP can achieve competitive robustness certificates. By identifying this connection, the paper could seed further innovations in randomized smoothing, where the design of smoothing mechanisms has been limited due to challenges in obtaining tractable robustness certificates. This has been a problem in prior work, where analyses of smoothing mechanisms using input-dependent additive noise (e.g., Hong et al., 2022) were later shown to be unsound. **Quality:** The technical foundation is strong, which is important for work that aims to achieve provable robustness. The experiments are solid, covering a range of settings, including those where the adaptive masking approach does not achieve a significant performance advantage (ImageNet). In some cases the improvement in certified accuracy/standard accuracy over baselines is quite significant (on the order of 10 percentage points). **Clarity:** The paper was a pleasure to read. Weaknesses: The proposed adaptive masking smoothing mechanism does not achieve significant improvements in certified robustness over baselines in many cases, especially for CIFAR-10 (Fig 2) and ImageNet (Fig 6). 
There is a more pronounced improvement for CelebA, however I wonder if this is related to the fact that the masking model is trained under supervision or whether the masking smoothing mechanism is simply a better fit for CelebA? This could perhaps be investigated by repeating the experiment without using the mouth location metadata. Given the (at times) marginal improvement of the adaptive masking smoothing mechanism, it would be interesting to understand how it compares along other dimensions, like computational efficiency and model size. It’s noted (line 324) that the mechanism is less efficient and uses a larger model, however it would be good to quantify to better understand the trade-offs. Theorem 2.3 applies for a composition of randomized Gaussian mechanisms, which may be limiting given the range of mechanisms studied in the DP literature. I expect the theorem could be adapted for other mechanisms, but I wonder whether this would result in a tractable radius? Technical Quality: 4 Clarity: 3 Questions for Authors: - To better understand the generality of the approach: can a certified radius be derived for a composition of heterogeneous f-DP mechanisms (not necessarily all Gaussian)? - Is there an intuitive explanation for the larger gap in certified accuracy between ARS and Cohen et al. for CelebA versus ImageNet? - Are results available for a non-uniform split of the noise budget (line 223)? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes, limitations are addressed adequately in Section 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful review. We start by answering the main questions in the review and then address further comments. > Q1. “[...] can a certified radius be derived for a composition of heterogeneous f-DP mechanisms [...]?” This is a great question! As noted in the review, the theory directly applies. However, there are two key elements to compute the bounds that are not necessarily trivial to get for general mechanisms: (1) the f-DP curve of a mechanism at any attack radius value r (to compute the sup line 471); (2) the composition of such curves across mechanisms (this is really easy for Gaussian mechanisms, but more challenging for other mechanisms even without heterogeneity). Fortunately, the DP community already has several tools that one could leverage to explore this design space. Notable examples that we believe are promising include: Using results from deep learning training with f-DP that study efficient composition of subsampled Gaussian mechanisms (https://arxiv.org/abs/1911.11607). This would help support steps that first subsample pixels and then apply the Gaussian mechanism (though some work is needed on the subsampling front). Using numerical tools for f-DP curves and composition (e.g., https://arxiv.org/abs/2106.08567, https://arxiv.org/abs/2106.02848) to support more complex mechanisms and heterogeneous composition. > Q2. “Is there an intuitive explanation for the larger gap in certified accuracy between ARS and Cohen et al. for CelebA versus ImageNet?” We would first like to address one possible reason suggested in the review: > “There is a more pronounced improvement for CelebA, however I wonder if this is related to the fact that the masking model is trained under supervision or whether the masking smoothing mechanism is simply a better fit for CelebA?” Thank you for the great suggestion. We re-ran our CelebA experiments, but without supervision for the mask, and it turns out that the supervision was not needed. 
The results are essentially unchanged (if anything, the variance is lower due to better masks) and visually the masks are a bit better (likely because our supervision was a coarse, approximate bounding box around the mouth, while masks without supervision are more precise). Please see Fig. 8 and 9 in the rebuttal PDF attached to the global response. As to why the results on CelebA are much better than on other tasks, we believe that it is because the CelebA task is very well suited to gains from dimension reduction, which our approach provides. Indeed, our task on CelebA has a fairly high input resolution for certification (160x160), while the relevant part for the prediction (the mouth) is often as small as 15x15. We can see in the mask images in the attachment that the mask model accurately selects the relevant mouth area, and the final image is much clearer at that position due to noise reduction from the lower dimension. We investigated why we did not see similar improvements on CIFAR-10 with large $k$. We found a pre-processing inconsistency between our steps that led to suboptimal masks (this only affected CIFAR-10 experiments for ARS and no other results). Fixing this led to crisper masks and improved performance (see Figure 11b for results with $k=64$ that lead to gains similar to those seen for CelebA). On ImageNet, we do not expect such gains, as the images are more complex and relevant parts of the image are fairly large compared to the total dimensions. > Q3. “Are results available for a non-uniform split of the noise budget (line 223)?” We did not observe significant gains in early experiments learning the split or using non-uniform splits, so we do not pursue this in this paper and do not have extensive results. However, it is plausible that more extensive tuning of this parameter can yield improvements, and it would be interesting to explore this avenue in more depth. 
> “It’s noted (line 324) that the mechanism is less efficient and uses a larger model, however it would be good to quantify to better understand the trade-offs.” We provide some numbers on ll. 271-273 of the submission (the time overhead for certification compared to Cohen et al. is about 2x). We will add more details on mask model size and impact on training time and resources in that paragraph and Appendix D. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed responses to my questions. I have decided to raise my score for soundness and the overall rating. It will be interesting to see further exploration of this framework, particularly as advancements are made in f-DP.
Summary: The paper uses tools from differential privacy (f-DP relating privacy to hypothesis testing, and also the compositional rule of DP) to design a differentially private 2-step mechanism. The first step is interpreted here as creating a mask, and the second step is the "standard randomized smoothing" on the masked input, so it potentially greatly reduces the dimensionality. Strengths: The main result is Proposition 2.4, where it is shown that we can use two-step randomized smoothing, where in the first step we effectively obtain a mask to mask out some of the input pixels, thereby effectively escaping the curse of dimensionality we have in randomized smoothing, and then we use the standard smoothing on the masked input. The soundness is guaranteed by tools from differential privacy, framing the adversarial problem in the privacy language and using the standard (I suppose) tools to perform a private compositional test. The neighbouring databases (in the privacy sense) are formulated as $\ell_\infty$ distance in the image space, which exactly matches the setting of adversarial robustness (this was considered in the very first RS paper already, which also took the DP approach to randomized smoothing). The originality comes from bringing standard tools from a different field and demonstrating that they are effective here. Quality is high; the results and ideas are fresh and new in the field and improve on SotA. Clarity is somewhat good but not perfect; the authors seem to come from the DP community (I'm from robustness), so it requires some effort and reading other papers for me to follow. Significance is on the higher end of the randomized smoothing papers in recent years; the results might seem incremental on benchmarks, but the actual techniques are new, so it will likely attract attention. 
The downside is the unconvincing empirical evaluation (it is limited, and in the standard settings the improvements are very mild; the main benefits are demonstrated on the CelebA benchmark); overall, the empirical evaluation does not convince me that this approach outperforms SotA by a non-trivial margin. This being said, the work (mainly the tools used) is clearly interesting enough to be accepted. very minor: * f-DP is not really introduced I think, the $f$ appears in prop 2.1 without any explanation, right? * unify Dong 2019 and Dong 2022 citations. I think they are the same? * "adaptivity over multi-step computation comes with no increase in certified radius" - no decrease? Weaknesses: ^ Technical Quality: 3 Clarity: 3 Questions for Authors: - Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful review. We will add a background section on f-DP in the appendix and forward reference it before Prop 2.1 to provide more context. We will unify the citations to prefer the published edition (2022) over the arXiv edition (2019). Thank you for catching our increase/decrease typo! It is indeed no decrease. > [...] in the standard settings the improvements are very mild, the main benefits are demonstrated on CELEBA benchmark) [...] > This being said, the work (mainly the tools used) is clearly interesting enough to be accepted In the attached pdf, we provide further results for CIFAR-10 with no background and large background (to emphasize the importance of dimensionality). We investigated the degree of improvement on CIFAR-10, and found a pre-processing inconsistency between steps that led to suboptimal masks (this only affected CIFAR-10 experiments for ARS and no other results). Fixing this led to crisper masks and improved performance at large $k$ (Figure 11b), where the results are now similar to those of CelebA. On ImageNet, we do not expect such gains, as the relevant parts of the image are fairly large compared to the total dimensions. Please note the noise levels in these experiments follow the suggestion of Zvfp, and we will update all the paper results to these standard noise levels with evaluation across three seeds. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I think the experiment in 11b is maybe a bit too artificial to convince me about the proposed method, but I am already convinced anyway :-) I keep my rating.
Rebuttal 1: Rebuttal: We thank all of the reviewers for their constructive reviews. We provide answers to questions in our review-wise responses, and we have completed experiments following the suggestions and requests that we include in the pdf attached to this global response. Specifically: - We provide CelebA experiments without mask supervision (reviewer 7UWe), and the results are as good (if not better, as the masks are sharper) than with our coarse mask supervision. See Figures 8 & 9 (the first two figures in the rebuttal pdf). - For CIFAR-10, we investigated the degree of improvement, as highlighted by reviewers NkYs and 7UWe, and found a pre-processing inconsistency that led to suboptimal masks (this only affected ARS, on CIFAR-10 experiments, and no other results). Fixing this improves results both quantitatively and qualitatively. For $\sigma = 0.12, 0.5, 1.5$ and $k=48$ (bottom section in Table 1, last column), ARS standard certified accuracy (at $r=0$) improves to 83.66% from 82%, 66.6% from 64.8%, and 35.5% from 34%, with similar standard deviations as before. We also report results (masks and certified accuracy) for no background (k=32) and large background (k=64) to emphasize the importance of higher dimensionality. As expected, ARS provides real but modest gains in terms of certified accuracy on standard CIFAR-10 images (Figure 11a): the masks select relevant pixels, but dimensionality gains are small (Figure 10a). For $k=64$, the gains are much larger (Figure 11b), as the masks successfully ignore the high-dimensional distractor background (Figure 10b). - We provide results at the common noise levels of sigma = 0.25, 0.5, 1.0 (reviewer Zvfp) on CIFAR-10 (Figure 11), including without background (Figure 11a), and ImageNet (Figure 12). We will update all the paper results to these standard noise levels with evaluation across three seeds. 
Due to time and computational constraints during the rebuttal phase, we have run a single seed for ImageNet experiments. Pdf: /pdf/72485d2a0ddbd5b8f56e0ad835389e857557aac7.pdf
NeurIPS_2024_submissions_huggingface
2024
RETR: Multi-View Radar Detection Transformer for Indoor Perception
Accept (poster)
Summary: This work introduces a novel method, RETR, for indoor object detection and segmentation based on multi-view radar heatmaps. RETR extends the popular DETR framework and incorporates modifications specific to multi-view radar perception, such as depth-prioritized feature similarity via TPE, a tri-plane loss, and a learnable radar-to-camera transformation. Experiments on two datasets demonstrate the effectiveness of RETR for object detection and segmentation. Strengths: 1. This work utilizes multi-view radar heatmaps as input for indoor human perception, which has broad application for privacy-aware indoor sensing and monitoring. 2. This work extends image-based DETR to multi-view radar-based RETR, establishing a new baseline for radar-based human detection and segmentation. The proposal of tunable positional embedding is interesting. It can enable the adjustment of different axes for positional embeddings. 3. The experiments are comprehensive, with two tasks validated on two datasets. The details of hyperparameters and implementation are adequate, making this paper a good reference for future works. Weaknesses: 1. Most commercial radars have 2D virtual antenna arrays, e.g., 16*8, instead of only a pair of 1D antenna arrays. Is there any specific reason for considering such kinds of radar antenna arrays? Normally, the 2D heatmaps (range-elevation, range-azimuth) are generated by projecting the 3D radar cube data onto two views. The explanation in Section 2 does not fit the real process of multi-view radar heatmap generation. If this work considers perception with such a unique radar antenna array, it would be better to introduce what type of radar the dataset used. 2. In the experiments, only radar heatmap-based methods are used for comparison. It would be better to incorporate the results of radar point cloud-based and camera-based approaches for a more complete comparison. 
The survey of related work is also not complete, ignoring recent works in radar-based object detection for autonomous driving [1-3], which also use radar heatmaps: [1] Liu, Yang, et al. "Echoes beyond points: Unleashing the power of raw radar data in multi-modality fusion." Advances in Neural Information Processing Systems 36 (2024). [2] Paek, Dong-Hee, Seung-Hyun Kong, and Kevin Tirta Wijaya. "K-radar: 4d radar object detection for autonomous driving in various weather conditions." Advances in Neural Information Processing Systems 35 (2022): 3819-3829. [3] Skog, Mikael, et al. "Human Detection from 4D Radar Data in Low-Visibility Field Conditions." arXiv preprint arXiv:2404.05307 (2024). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. At line 28, it is better to explain why radar heatmaps are preferred for challenging perception tasks. At line 31, it's not necessary to mention whether a work is publicly accessible in the introduction section. 2. The relationship and differences between previous works in radar-based detection and segmentation, e.g., RFMask and RETR, are not well explained. 3. Better to show the tokenization step in Figure 3 to aid in understanding the pipeline. 4. In line 135, "equation 3" should be "Equation 3". 5. How did the authors implement the top-K feature selection step? And how is such a selection step supervised to ensure the selected features are significant for the detection task? 6. In line 229, what is the reason the authors claimed that the calibration may be only accurate for a limited interval of depths and angles? 7. The MMVR dataset can not be found online. 8. Please provide captions for Tables 1 and 2 to increase readability. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed comments and valuable feedback! We provide our point-to-point responses below. Due to space constraints, we have shortened your comments for brevity. **Most commercial radars have 2D virtual antenna arrays, e.g., 16*8, instead of only a pair of 1D antenna arrays. Is there any specific reason for considering such kinds of radar antenna arrays?** Thanks for raising this point. The main reason for using a configuration of two cascading radars in horizontal and vertical orientations is to harness `finer angular resolution` in both azimuth and elevation domains to support all three perception tasks, especially for pixel-level segmentation. Both HIBER and MMVR datasets used this configuration. Most commercial radars have a typical configuration of $4$ Rx and $3$ Tx antennas, yielding a virtual array of $8$ elements in one angular dimension and $4$ in the other. Examples include TI's IWR1443 and IWR1843 chipsets and, more recently, NXP's TEF81xx and TEF82xx series. This usually leads to an angular resolution of $15^\circ$ in one angular dimension and about $30^\circ$ in the other angular dimension. The configuration of two cascading radars ($12$ Tx and $16$ Rx) yields a virtual array of $86$ non-overlapping half-wavelength-spaced elements in both vertical and horizontal dimensions, offering an angular resolution of $1.3^\circ$, more than $10 \times$ better. The resulting high-resolution multi-view radar heatmaps can provide fine-grained radar features and support not only BBox estimation and pose estimation but also the more challenging pixel-level segmentation. **The explanation in Section 2 does not fit the real process of multi-view radar heatmap generation. 
Better to introduce what types of radar the dataset used.** Due to space constraints, the current version of Section 3 - Generation of Radar Heatmaps presents an abbreviated description of the full process by skipping steps such as MIMO waveform separation for virtual array processing, integration over the Doppler domain, and projection onto the azimuth and elevation domains. In the updated paper, we plan to include a new section in the Appendix to introduce the dual (horizontal-vertical) radar configuration and provide a detailed explanation of the generation of the two radar heatmaps. **In the experiments, only radar heatmap-based methods are used for comparison. Better to incorporate the results of radar point cloud-based and camera-based approaches for a more complete comparison.** Thanks for the suggestion. We have two response points: 1. `Dataset`: To the best of our knowledge (see Table 4 in the PDF of the global response), we cannot find an indoor radar dataset with both radar heatmap and point cloud formats for evaluating a given method (either RETR or a baseline). 2. `Baseline`: We have included DETR, a camera-based method, in our baseline comparisons. We will make an effort to incorporate a radar cloud-based baseline in the updated paper. **The survey of related work is also not complete, ignoring recent works in radar-based object detection for autonomous driving [1-3], which also use radar heatmaps: [1] Liu, Yang, et al... [2] Paek, Dong-Hee, ... [3] Skog, Mikael....** Thanks for pointing out these recent radar datasets featuring radar heatmaps. We will include them in the updated paper. Specifically, we plan to discuss [3] in Section 2 - Related Work, as it is particularly relevant to our task of human perception using radar heatmaps. **Line 28: better to explain why radar heatmaps are more preferred for challenging perception tasks.** In the updated paper, we plan to highlight the difference between radar heatmaps and point clouds. 
We will also explain how high-resolution, multi-view radar heatmaps may be better suited for supporting more challenging perception tasks, such as pixel-level segmentation. **The relationship and difference between previous works in radar-based detection and segmentation...is not well explained.** In Fig. 2 of the main paper, we intended to highlight the major differences between RFMask and the proposed RETR. RFMask uses regional proposals and features from only the horizontal radar view with a fixed height. RETR, on the other hand, employs a detection transformer, fuses multi-view radar features, and exploits the unique multi-view radar setting via TPE and radar-to-camera coordinate transformation. We will expand the `RFMask with Refined BBoxes` section in the Appendix to clarify these differences further. **Better to show the tokenization step in Figure 3....** see Fig. 2 of the PDF in the global response. **implement the top-K feature selection? And how to supervise...?** For the Top-$K$ selector, we just select feature tokens with the highest norms computed over the channel dimension. We observed that direct supervision is unnecessary for this step because radar features tend to be extremely localized. Please refer to Fig. 6 of the main paper for visualizing the cross-attention between selected features and detected BBoxes. **Line 229, why the author claimed that the calibration may be only accurate for a limited interval of depth and angles.** This is mainly due to the varying cross-range radar resolution over depth and angles as the radar operates in a polar coordinate system. For a given angular resolution, the cross-range cell resolution at $3$ meters will be about $3\times$ larger than that at $1$ meter. To compensate for such resolution differences, one may need to repeat the calibration for different depth/angular intervals. 
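The norm-based Top-$K$ token selection described in the answer above can be sketched as follows. This is a minimal NumPy illustration of the stated rule (keep the $K$ feature tokens with the largest channel-wise norm), not the authors' actual implementation, which presumably operates on batched tensors:

```python
import numpy as np

def topk_tokens(feature_map: np.ndarray, k: int) -> np.ndarray:
    """Select the k feature tokens with the largest channel-wise norm.

    feature_map: (C, H, W) backbone output; each spatial cell is
    treated as one token of dimension C. Returns shape (k, C).
    """
    C, H, W = feature_map.shape
    tokens = feature_map.reshape(C, H * W).T   # (H*W, C): one row per cell
    norms = np.linalg.norm(tokens, axis=1)     # per-token L2 norm over channels
    idx = np.argsort(norms)[::-1][:k]          # indices of the k largest norms
    return tokens[idx]
```

Only these $K$ tokens are then fed to the transformer encoder, which is what keeps the attention cost low when radar responses are highly localized.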
**The MMVR dataset can not be found online.** We reached out to the MMVR authors who provided us with the "P2" split at the time of submission. The MMVR dataset should be available now by searching "MMVR Dataset", although we are not allowed to share any links in the rebuttal. **Minor questions: Line 31:... Line 135: .....provide captions for table 1 and 2** We will make the changes accordingly. --- Rebuttal Comment 1.1: Comment: Thank the authors for providing such a detailed response to my comments. I am looking forward to seeing more explanation regarding the radar configuration and generation of the radar heatmaps in the revised version. Regarding perception methods based on radar heatmaps, we found two more recent works: [1] Kong, Seung-Hyun, Dong-Hee Paek, and Sangyeong Lee. "RTNH+: Enhanced 4D Radar Object Detection Network using Two-Level Preprocessing and Vertical Encoding." IEEE Transactions on Intelligent Vehicles (2024). [2] Ding, Fangqiang, Xiangyu Wen, Yunzhou Zhu, Yiming Li, and Chris Xiaoxuan Lu. "RadarOcc: Robust 3D Occupancy Prediction with 4D Imaging Radar." arXiv preprint arXiv:2405.14014 (2024). We hope citing them could improve the rigor of your related work. Overall, thank you for your efforts in addressing my concerns, and I have decided to adjust my recommendation to a score of 6. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer's time and effort in reviewing our rebuttal, and we're delighted to see that it has been positively received. We will certainly consider the suggested references to update our related work. Additionally, we kindly encourage the reviewer to update the score to reflect the stated intentions.
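As a side note on the antenna-array discussion earlier in this thread: the quoted angular resolutions (about $1.3^\circ$ for the 86-element cascaded virtual array, and roughly $15^\circ$ for a typical 8-element commercial array) are consistent with the common uniform-linear-array approximation $\theta_{res} \approx \lambda/(N d) = 2/N$ radians for half-wavelength element spacing. A back-of-envelope check, not the authors' exact signal processing:

```python
import math

def angular_resolution_deg(n_elements: int) -> float:
    # Common approximation for a uniform linear array with
    # half-wavelength spacing d = lambda/2:
    # theta_res ~ lambda / (N * d) = 2 / N radians.
    return math.degrees(2.0 / n_elements)

print(round(angular_resolution_deg(86), 2))  # ~1.33 deg (quoted as 1.3 deg)
print(round(angular_resolution_deg(8), 1))   # ~14.3 deg (quoted as ~15 deg)
```

The same approximation also explains the rebuttal's cross-range point: the cross-range cell size is roughly depth times $\theta_{res}$, so it grows linearly with depth.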
Summary: The primary content of this paper is an introduction to a multi-view radar detection transformer algorithm (RETR) for indoor perception. The algorithm achieves effective object detection in indoor environments by utilizing multi-view radar data and combining self-attention and cross-attention mechanisms. The author validates the performance improvement of the algorithm through experiments and discusses its application in indoor perception. Strengths: 1. The idea is helpful for the field of indoor multi-view radar perception. 2. The paper is clear and easy to follow. 3. Rigorous ablation studies were conducted, providing evidence of the proposed method's efficacy. 4. RETR, based on the original DETR, has achieved significant performance improvements on indoor multi-view radar datasets with heatmap input. Weaknesses: 1. Radar perception datasets primarily utilize point clouds as the data form. Methods based on heatmaps for indoor multi-view radar perception are still scarce. The author mainly compares with customized DETR and RFMask, but the comparison is insufficient. Have the authors considered conducting further comparison experiments on the HuPR [1] dataset? If the authors can further demonstrate the method's generalization ability, I would consider giving a higher score. 2. The method presented is an improvement on DETR and combines existing methods in a novel way. While innovative, it's not highly original for NeurIPS. 3. Research on indoor multi-view radar perception using heatmaps is relatively scarce. The author is unable to provide source code, potentially limiting contributions to this field. [1] Lee, Shih-Po, et al. "Hupr: A benchmark for human pose estimation using millimeter wave radar." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2023. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Could you provide a detailed description of the design of the Top-k selector? 
2. Can you provide a more detailed analysis of the impact of the proposed RETR on real-time performance? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed comments and valuable feedback! We provide our point-to-point responses below. **Radar perception datasets primarily utilize point clouds as the data form. Methods based on heatmaps for indoor multi-view radar perception are not yet sufficient. The author mainly compares with customized DETR and RFMask, but the comparison is insufficient. Have the authors considered conducting further comparison experiments on the HUPR [1] dataset? If the authors can further demonstrate the method's generalization ability, I would consider giving a higher score.** Borrowing Table 1 from the MMVR paper, also listed as Table 4 in the PDF of the global response, you are correct that indoor radar perception datasets primarily use point clouds. However, an increasing number of datasets employ multi-view radar heatmaps to support more diverse perception tasks: RF-Pose, HuPR, HIBER, and MMVR, with the latter three collected since 2022. RF-Pose and HuPR have a resolution of $15^\circ$ in the two-view radar heatmaps, while HIBER and MMVR offer a finer resolution of $1.3^\circ$. Thanks for pointing out the HuPR dataset. First, HuPR focuses on `pose estimation (keypoints)`, while our work targets `object detection (BBox estimation)` and `segmentation (pixel-level masks)`. To utilize the HuPR dataset for training and evaluating our RETR pipeline, we would need to extract bounding box and segmentation labels from each HuPR frame which, unfortunately, cannot be completed within the rebuttal period. Second, even with BBox and mask labels from HuPR, our RETR pipeline requires geometric information about the radar and camera coordinate systems, including the radar-to-camera transformation (rotation matrix and translation vector) and the 3D-to-2D camera projection matrix (pinhole camera model). Both HIBER and MMVR datasets provide this calibrated information. 
While our learnable radar-to-camera transformation can reduce some geometry dependency, the 3D-to-2D projection matrix is still necessary. `We have reached out to the HuPR authors for this additional geometric information.` Once we receive it, we will report the RETR performance on at least BBox estimation. Third, we believe that the proposed RETR pipeline and its evaluation over `two separate datasets` (HIBER and MMVR) have sufficiently demonstrated the generalization capability. It is noted that the radar-to-camera geometries in these datasets are completely different but known (via calibration or learning). RETR is purposely designed to handle these differences by incorporating calibrated/learnable radar-to-camera transformations, 3D-to-2D projections, and tri-plane loss functions. **The method presented is an improvement on the DETR and combines existing methods in a novel way. While innovative, it's not highly original for NeurIPS.** We believe our contributions to be novel and original as they are not a simple combination of existing methods. Indeed it is true that we build upon DETR, yet we propose several contributions that make our RETR unique. First, we highlight how we adapt DETR to the multi-view scenario thanks to the self-attention mechanism combined with the Top-K selection and by reusing the cross-attention mechanism to avoid traditional object-to-multi-view-feature association. Second, we introduce a depth-prioritized feature similarity via a tunable positional embedding (TPE), incorporating a crucial inductive bias of shared depth across the two radar views to enhance downstream tasks. Third, we propose a tri-plane loss from both radar and camera coordinate systems which, to the best of our knowledge, has never been considered for object detection from radar heatmaps. **Research on indoor multi-view radar perception using heatmaps is relatively scarce. 
The author is unable to provide source code, potentially limiting contributions to this field.** We plan to release our code after paper acceptance. **Question 1: Could you provide a detailed description of the design of the Top-k selector?** Regarding the Top-$K$ selection, we begin with the feature map extracted from the shared backbone. Each cell in the feature map can potentially be considered as a patch/token for input into the subsequent transformer encoder. To alleviate the time complexity of the attention module, we select only the tokens with the highest norms computed over the channel dimension. In this way, we propagate only the most relevant information to the following modules and improve inference performance, while keeping the complexity low. In Fig. 6 of the main paper, we visualize selected top features (and their locations in the radar heatmap) in the two radar views that are used to detect bounding boxes in the image plane by inspecting the cross-attention module. In Table 1 of the PDF in the global response, we have included an additional ablation study examining the impact of $K$ on detection performance. **Question 2: Can you provide a more detailed analysis of the impact of the proposed RETR on real-time performance?** We report the inference time in Table 3 of the PDF in the global response. We used an NVIDIA A40 GPU for evaluation. RETR's inference time is comparable to RFMask's ($23.75$ ms for RETR against $20.89$ ms for RFMask). We compute the average inference time across all radar frames in the test data. --- Rebuttal Comment 1.1: Comment: Thank you for your response and the detailed clarification of my comments. The explanation regarding the use of the HuPR dataset and the challenges faced in obtaining labels such as Bounding Boxes is indeed insightful. This complexity could potentially limit the scope of your current evaluation, especially during the rebuttal period.
Your explanation of the Top-K selection mechanism and the related experiments on inference time further strengthen the argument for the effectiveness of RETR. Overall, I believe your research contributions are significant and closely related to the field. The decision to open-source the code will have a positive impact on the research community for radar perception tasks. Thank you for your efforts in addressing my concerns, and I have decided to adjust my recommendation to a score of 6. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer's time and effort in reviewing our rebuttal, and we're delighted to see that it has been positively received. We will certainly consider the suggested improvements for the updated paper.
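The Top-$K$ selection described in this thread — keeping only the feature-map cells with the largest channel-wise norms as transformer tokens — can be sketched in a few lines. This is not the authors' implementation; it is a minimal numpy illustration with made-up shapes and values.

```python
import numpy as np

def topk_tokens(feature_map, k):
    """Select the k tokens with the largest channel-wise L2 norm.

    feature_map: array of shape (H, W, C) from a shared backbone.
    Returns the selected tokens (k, C) and their (row, col) locations.
    """
    h, w, c = feature_map.shape
    tokens = feature_map.reshape(h * w, c)      # one token per spatial cell
    norms = np.linalg.norm(tokens, axis=1)      # relevance score per token
    idx = np.argsort(norms)[::-1][:k]           # indices of the k largest norms
    locations = np.stack(np.unravel_index(idx, (h, w)), axis=1)
    return tokens[idx], locations

# toy usage: a 4x4 feature map with 8 channels, keep the top 3 tokens
fm = np.zeros((4, 4, 8))
fm[1, 2] = 5.0   # make one cell clearly dominant
fm[3, 0] = 3.0
fm[0, 1] = 1.0
toks, locs = topk_tokens(fm, 3)
print(locs[0])   # → [1 2], the cell with the largest norm
```

Only these $2K$ tokens (for two radar views) would then feed the transformer encoder, which is what keeps the quadratic attention cost manageable.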
Summary: In this paper, the authors propose Radar dEtection TRansformer (RETR), an extension of the popular DETR architecture, tailored for multi-view radar perception. RETR inherits the advantages of DETR, eliminating the need for hand-crafted components for object detection and segmentation in the image plane. Strengths: Two radars with different directions are deployed. A transformer is applied for segmentation. Weaknesses: For indoor perception with radar, multi-path is expected. It would be good to add a paragraph to discuss this issue and how to mitigate multi-path or its impact on perception/segmentation performance. The motivation for applying a transformer for indoor segmentation is not well discussed. What are the benefits and unique challenges of applying a transformer to radar-based indoor segmentation? For the introduction of Generation of Radar Heatmaps, please consider citing the following paper: S. Sun, A. P. Petropulu and H. V. Poor, "MIMO Radar for Advanced Driver-Assistance Systems and Autonomous Driving: Advantages and Challenges," in IEEE Signal Processing Magazine, vol. 37, no. 4, pp. 98-117, July 2020. Technical Quality: 3 Clarity: 3 Questions for Authors: Is there any interference (cross-path signals) between the two radars deployed in vertical and horizontal directions? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Usually a large amount of high-quality radar data is required to train the transformer. It is highly recommended to carry out validation with different amounts of training data. Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed comments and valuable feedback! We provide our point-to-point responses below. **For indoor perception with radar, multi-path is expected. It would be good to add a paragraph to discuss this issue and how to mitigate multi-path or its impact on perception/segmentation performance.** Thanks for the insightful comment. We agree that multi-path reflections from the ground, ceiling, and other strong scatterers (e.g., metal) can cause (first-order or second-order) ghost targets and elevate the noise floor. One way to address this issue is to incorporate classical signal processing techniques into the radar heatmap generation to remove these ghost targets and the static background reflection. We may also address this issue directly in the end-to-end radar perception pipeline by labeling these ghost targets in standard radar heatmaps (although it is difficult and costly to obtain these labels) and directly classifying RETR object queries to one of $\{\emptyset, \text{person}, \text{ghost}\}$, alongside regressing queries to the bounding box parameters. As you suggested, we will add a paragraph to discuss the multi-path issue and potential ways to mitigate its impact. **The motivation for applying a transformer for indoor segmentation is not well discussed. What are the benefits and unique challenges of applying a transformer to radar-based indoor segmentation?** We agree that in the main paper, we primarily used the object detection example to motivate RETR, while the segmentation part was deferred to Appendix B Segmentation. In the updated paper, we will emphasize that segmentation is an integral part of RETR by highlighting the following points in the main paper: 1. The segmentation head uses the estimated bounding box (BBox) as a prior or constraint to classify each pixel within the BBox (see Fig. 8 Illustration of Segmentation Head in Appendix). 2.
The pretrained RETR components, such as the backbone, top-$K$ selection, and the detection transformer are reused with frozen weights to train the segmentation head (the lower branch) in Fig. 3 of the main paper. We will point out the challenges in extracting finer-grained radar features, fusing features from two radar views, and utilizing the prior BBox from the detection head to support the pixel-level segmentation task. We will emphasize how to address these challenges by leveraging the DETR architecture to avoid cross-view radar feature association and introducing additional modifications, such as tunable positional embedding, radar-to-camera coordinate transformation, and tri-plane loss. **For the introduction of Generation of Radar Heatmaps, please consider citing the following paper: S. Sun, A. P. Petropulu and H. V. Poor, "MIMO Radar for Advanced Driver-Assistance Systems and Autonomous Driving: Advantages and Challenges," in IEEE Signal Processing Magazine, vol. 37, no. 4, pp. 98-117, July 2020.** Thank you for the suggestion. We will add the suggested paper to the reference list. **Is there any interference (cross-path signals) between the two radars deployed in vertical and horizontal directions?** Thanks for raising this important point. To prevent cross-radar interference between the horizontal and vertical radar sensors, the two radars were configured to operate at different frequency bands. In the HIBER dataset, the horizontal radar operates in the $77 - 78.23$ GHz band, while the vertical radar is in the band of $79-80.23$ GHz. In the MMVR dataset, the horizontal radar operates in the $77-78.36$ GHz band, while the vertical radar is in the band of $79-80.36$ GHz. In both cases, there is a minimum gap of $500$ MHz between the operating frequency bands of the two radars. **Usually a large amount of high quality radar data is required to train the transformer. 
It is highly recommended to carry out validation with different amounts of training data.** Good point. In Table 2 of the PDF in the global response, we report the impact of training data size on detection performance using the MMVR dataset. We compare the original data size (x1.0) with $190,441$ radar frames against reduced data sizes of half (x0.5) and one-tenth (x0.1). The results show a gradual improvement in detection performance with an increase in data size, particularly at higher IoU thresholds, such as AP$_{75}$.
Summary: The paper introduces the Multi-View Radar Detection Transformer for indoor object detection. Inspired by DETR, the authors propose an end-to-end RETR to detect objects from the radar inputs. To improve the feature association across the two radar views, the authors introduce a new Tunable Positional Embedding. The proposed approach achieves solid performance on the standard benchmarks of radar-based indoor object detection. Strengths: - The paper is well-written and easy to follow. - The proposed approach is simple yet well-motivated. The proposed RETR eliminates the cumbersome design of the detection network for radar inputs. - The analysis of Tunable Positional Embedding in Section 4.3 is comprehensive and well-motivated. - The proposed method achieves strong experimental results on MMVR and HIBER datasets. Weaknesses: - I am wondering how the Top-K Feature selection impacts the performance of the network. How can we choose the number of K? It seems that there is no experiment to validate the choice of K. It will be better if the authors conduct an ablation study to explore the effectiveness of choosing K. - What is the computation cost of the proposed RETR model? It will be better if the authors report the inference time/computational cost of the proposed model. - For the learnable radar-to-camera coordinate transformation, while I acknowledge the performance improvement from this proposed module, I have several questions related to this module. - - Does the entire dataset use the same set of learnable vector $\omega$ and translation vector $t$? Or will each radar sample in the data have a different vector $\omega$ and translation vector $t$? - - How can we verify that the learnable radar-to-camera coordinate transformation is accurate? Is there any way to evaluate it using the ground truths? Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to my weakness section. 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations and broader impact in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed comments and valuable feedback! We provide our point-to-point responses below. **I am wondering how the Top-K Feature selection impacts the performance of the network. How can we choose the number of K? It seems that there is no experiment to validate the choice of K. It will be better if the authors conduct an ablation study to explore the effectiveness of choosing K.** We appreciate Reviewer DJHb's suggestion. In response, we have conducted additional experiments to examine the impact of $K$ on detection performance. The results, detailed in Table 1 of the PDF in our global response, indicate that increasing $K$ improves object detection performance. However, as noted in our detailed response below to your next comment on the computation cost and inference time, choosing a larger $K$ also significantly increases the training and inference time. **What is the computation cost of the proposed RETR model? It will be better if the authors report the inference time/computational cost of the proposed model.** Thanks for pointing out the computation cost and inference time. First, following the computational complexity notation used in the DETR paper, every self-attention mechanism in the encoder has a complexity of $\mathcal{O}(d^2 2K + d (2K)^2)$ where $d$ is the embedding dimension and $K$ is the number of selected features from the Top-$K$ selection. The cost of computing a single query/key/value embedding is $\mathcal{O}(d' d)$ (with $d=Md'$ where $M$ denotes the number of attention heads and $d'$ the dimension in each head), while the cost of computing the attention weights for one head is $\mathcal{O}(d' (2K)^2)$. Other computations may be negligible. In the decoder, each self-attention mechanism has a complexity of $\mathcal{O}(d^2 N + d N^2)$ where $N$ is the number of queries, and the cross-attention between query and multi-view radar features has a complexity of $\mathcal{O}(d^2(N + 2K) + d 2NK)$. 
In conclusion, the overall complexity of our RETR model is $\mathcal{O}(4d^2 K + 4d K^2 + 2d^2 N + d N^2 + 2d NK)$. Second, regarding the inference time, we report the average inference time in milliseconds in Table 3 of the PDF in the global response. We used an NVIDIA A40 GPU to evaluate the inference time over all frames in the test data. RETR achieved an average inference time of $23.75$ ms, which is comparable to the $20.89$ ms of RFMask. **For the learnable radar-to-camera coordinate transformation, while I acknowledge the performance improvement from this proposed module, I have several questions related to this module. Does the entire dataset use the same set of learnable vector $\omega$ and translation vector $t$? Or will each radar sample in the data have a different vector $\omega$ and translation vector $t$?** The learned vectors $\omega$ and $t$ (or, equivalently, the rotation matrix $R$ and translation vector $t$) are fixed and applied consistently to all test frames. During training, $\omega$ and $t$ were updated from one minibatch to the next, as they are part of the learnable parameters in RETR. **How can we verify that the learnable radar-to-camera coordinate transformation is accurate? Is there any way to evaluate it using the ground truths?** Good question. One way to verify the learned coordinate transformation is, for a given point in the 3D radar coordinate system, to check the distance between the two transformed points obtained with the calibrated and the learned coordinate transformations. Although the calibrated coordinate transformation (including both rotation matrix and translation vector) is NOT ground truth due to radar resolution (also see our responses to Reviewer XrXa), it serves as a reasonable baseline or reference benchmark for the learned coordinate transformation. In the PDF of the global response, Fig. 1 shows the distance difference between the two transformed points over the training steps using the MMVR dataset. 
We randomly initialize the learned rotation matrix and translation vector at iteration 1. The results demonstrate that, as training progresses, the learned radar-to-camera coordinate transformation becomes increasingly aligned with the calibrated one, indicating that the learning is moving in the correct direction. Finally, we'd like to add that by using the learnable radar-to-camera coordinate transformation, it is possible to incorporate the radar-to-camera geometry into the end-to-end radar perception pipeline without the need for a cumbersome calibration step, while still achieving comparable perception performance. --- Rebuttal Comment 1.1: Title: Feedback to Author Rebuttal Comment: Thank the authors for the good rebuttal. It has addressed my concerns. I hope that you can include these answers and experiments in your revised version. Therefore, I decided to increase my score to 6. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer's time and effort in reviewing our rebuttal, and we're delighted to see that it has been positively received. We will certainly consider the suggested improvements for the updated paper.
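The verification procedure discussed in this thread — transforming the same 3D radar point with both the calibrated and the learned $(\omega, t)$ and measuring the distance between the two results — can be sketched as follows. This is an illustrative numpy sketch, not the authors' code: the axis-angle and translation values are hypothetical, and $\omega$ is assumed to parameterize rotation via Rodrigues' formula.

```python
import numpy as np

def rotation_from_omega(omega):
    """Rodrigues' formula: axis-angle vector omega -> 3x3 rotation matrix."""
    theta = np.linalg.norm(omega)
    if theta < 1e-12:
        return np.eye(3)
    k = omega / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def transform(point, omega, t):
    """Map a 3D radar-frame point into the camera frame: R @ p + t."""
    return rotation_from_omega(omega) @ point + t

# compare a calibrated and a (hypothetical) learned transform on one point
p = np.array([1.0, 2.0, 3.0])
omega_cal, t_cal = np.array([0.0, 0.0, np.pi / 2]), np.array([0.1, 0.0, 0.0])
omega_lrn, t_lrn = np.array([0.0, 0.0, np.pi / 2 + 0.01]), np.array([0.12, 0.0, 0.0])
gap = np.linalg.norm(transform(p, omega_cal, t_cal) - transform(p, omega_lrn, t_lrn))
print(f"distance between transformed points: {gap:.4f}")
```

Tracking this gap over training iterations, as in Fig. 1 of the global-response PDF, shows whether the learned transformation is converging toward the calibrated one.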
Rebuttal 1: Rebuttal: We thank all reviewers for their insightful comments, suggestions and questions. In the rebuttal form to each reviewer, we provide detailed point-to-point responses. In these point-to-point responses, we often refer to the attached PDF for additional results in the form of tables and figures. Pdf: /pdf/c5f4a7d7c3919cfd2c41ea2edc4067281f55f049.pdf
NeurIPS_2024_submissions_huggingface
2024
Swift Sampler: Efficient Learning of Sampler by 10 Parameters
Accept (poster)
Summary: The paper introduces Swift Sampler (SS), an efficient algorithm for the automatic learning of data samplers in deep learning model training. SS addresses the challenges of high-dimensionality, sharpness, and costly evaluation in sample-based methods by mapping samplers to a low-dimensional space of hyper-parameters and employing a novel transform function to smooth the objective function. Utilizing Bayesian Optimization, SS quickly examines the quality of samplers through an approximation method that significantly reduces computational expense. Comprehensive experiments on tasks like image classification and face recognition across various datasets, including ImageNet and CIFAR, demonstrate SS's effectiveness in improving model performance, with notable improvements such as a 1.5% increase on ImageNet. The samplers learned by SS also exhibit good transferability across different neural network architectures, showcasing the algorithm's generality and computational efficiency, making it a valuable contribution to the field of deep learning. Strengths: 1. The writing is easy to read. 2. Extensive experiments were conducted, making the verification more convincing. 3. The community of Artificial Intelligence is in great need of suitable selection methods for data. Weaknesses: 1. No ablation studies or sensitivity analyses were performed to show the effects of different components or hyperparameters of the method. 2. The verification of the method focuses on the image classification task, and it would be more comprehensive to see experimental results on more complex tasks. 3. The work in this paper is similar to research on **active learning**, and I hope to see the difference between this work and **active learning** discussed in the related work. Technical Quality: 2 Clarity: 2 Questions for Authors: See the Weaknesses section. 
Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: It is recommended to use more complex tasks to verify the validity of the method, such as detection/segmentation in vision, VQA in vision-language, etc. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **W1: The effects of different components or hyperparameters of the method.** Thank you for your valuable feedback. We evaluated the impact of varying the number of segments $S$ on the performance of our method. The experiments were conducted on the CIFAR10 dataset with a noise rate of 20%. We tested $S = 2, 4, 6, 8$. As shown in **Table 5** of the one-page PDF, the performance improves significantly when increasing $S$ from 2 to 4. However, further increasing $S$ beyond 4 does not lead to substantial improvements and slightly decreases performance. Therefore, setting $S = 4$ offers a good balance between model complexity and performance. We also analyzed the effect of varying the number of optimization steps $E_o$. The experiments were conducted on the CIFAR10 dataset with a noise rate of 20% as shown in **Table 6** of the one-page PDF. We tested $E_o = 20, 40, 60, 80$. The results indicate that increasing $E_o$ from 20 to 40 leads to a noticeable improvement in performance. Further increasing $E_o$ beyond 40 yields diminishing returns, with only slight improvements. Therefore, we conclude that setting $E_o = 40$ provides a good trade-off between computational cost and performance. --- ### **W2: Experimental results of more complex tasks** We appreciate the reviewer's insightful comments and recognize the significance of evaluating our method in more practical and challenging scenarios. To address the concerns raised, we have conducted additional experiments on more complex tasks. 1. **Foundation Model Training on Large-Scale Datasets** We applied SS to the LAION-5B dataset, which consists of 5 billion images, using the GPT-3 model architecture for training. Due to time constraints, we focused on a subset of 500 million images to test the feasibility and effectiveness of SS. The training was conducted on a cluster with 32 NVIDIA A100 GPUs. 
Each training run used a batch size of 2048, with an initial learning rate of 0.1, decayed by 0.1 at the 30th, 60th, and 90th epochs, over a total of 100 epochs. As shown in **Table 1** of the one-page PDF, compared to the baseline uniform sampling method, SS improved the convergence speed by 25% and the final top-1 accuracy by 2.3% (from 72.4% to 74.7%). 2. **Training with Limited Data** We tested SS on a few-shot learning task using the Mini-ImageNet dataset. The dataset was split into 1-shot, 5-shot, and 10-shot scenarios. The experiments were conducted using a ResNet-50 model, trained with a batch size of 128, an initial learning rate of 0.01, and using SGD with Nesterov momentum set to 0.9. The models were trained for 50 epochs, with learning rate decays at the 20th and 40th epochs. As shown in **Table 2** of the one-page PDF, SS improved the accuracy in the 1-shot scenario by 5.2% (from 47.6% to 52.8%), in the 5-shot scenario by 4.3% (from 63.1% to 67.4%), and in the 10-shot scenario by 3.1% (from 70.3% to 73.4%). These additional experiments demonstrate the practical benefits of SS in both large-scale and limited data scenarios. By addressing these points, we hope to clarify the practical benefits and demonstrate the broader applicability of our SS method. --- ### **W3: The difference between this work and active learning.** Active learning (AL) is a well-studied area where the goal is to selectively query the most informative data points for labeling to improve model performance while minimizing the labeling effort. The key distinctions are outlined as follows: 1. **Objective:** - **Active Learning:** The primary goal is to reduce the labeling cost by selecting the most informative samples from an unlabeled pool to be labeled by an oracle. - **Swift Sampler:** Our objective is to optimize the sampling probabilities of already labeled training data to improve the model's convergence and performance. 
SS focuses on adjusting the importance of labeled data rather than acquiring new labels. 2. **Data Pool:** - **Active Learning:** Works with an initially large pool of unlabeled data and iteratively selects samples for labeling. - **Swift Sampler:** Operates on a fixed, fully labeled training dataset, optimizing the sampling strategy to enhance training efficiency and model accuracy. 3. **Methodology:** - **Active Learning:** Utilizes strategies like uncertainty sampling, query-by-committee, and expected model change to identify which unlabeled samples would be most beneficial to label. - **Swift Sampler:** Employs a low-dimensional representation of sampling strategies and Bayesian Optimization to find the optimal sampling probabilities for existing labeled data. 4. **Application:** - **Active Learning:** Commonly used in scenarios where obtaining labeled data is expensive, such as medical imaging and rare event detection. - **Swift Sampler:** Applicable to scenarios where large labeled datasets are available, and the goal is to improve training efficiency and performance, such as large-scale image classification and natural language processing tasks. --- ### **Limitations** Thank you for your valuable feedback. In addition to the two experiments mentioned in the response to **Weakness 2**, we also conducted experiments on different types of data. We employed SS on the Wikitext-2 dataset for language modeling tasks. The target model used was Wiki-GPT. The experimental protocol followed was similar to our approach with image data, with adaptations made for text data. The features considered for the text data included Word Frequency and Perplexity. The results of our experiments on the Wikitext-2 dataset are presented in the **Table 3** of the one-page PDF. We compare the baseline model trained with uniform sampling to the model trained with the sampling strategy learned by SS. 
We hope these additional experiments address your concern and show the broader applicability and contribution of our method. Thank you again for your valuable feedback. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification, it solved my problem. I will increase the rating from 4 to 5. --- Reply to Comment 1.1.1: Comment: We are very grateful for your recognition. We will incorporate the experimental results and the difference between this work and active learning into the main paper, following your valuable suggestions.
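The sampler formulation discussed in this thread — a function of only a handful of parameters that maps per-sample features to sampling probabilities — can be illustrated with a small sketch. The piecewise-linear map, its knot values, and the use of a generic score feature are hypothetical stand-ins, not the paper's learned sampler.

```python
import numpy as np

def sampler_probs(scores, knot_x, knot_y):
    """Map per-sample scores to sampling probabilities through a
    piecewise-linear function H defined by a few (knot_x, knot_y)
    parameters, then normalize. A handful of knots parameterize
    the whole sampler."""
    weights = np.interp(scores, knot_x, knot_y)   # piecewise-linear H
    weights = np.clip(weights, 1e-8, None)        # keep probabilities valid
    return weights / weights.sum()

rng = np.random.default_rng(0)
scores = rng.random(1000)   # e.g. a per-sample feature scaled to [0, 1]
# 4 segments -> 5 knots; this (illustrative) shape up-weights mid-range samples
probs = sampler_probs(scores,
                      [0.0, 0.25, 0.5, 0.75, 1.0],
                      [0.2, 1.0, 2.0, 1.0, 0.2])
batch = rng.choice(len(scores), size=128, replace=True, p=probs)
```

Searching over the few knot values (rather than one probability per sample) is what makes the optimization space low-dimensional enough for Bayesian Optimization.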
Summary: The paper focuses on designing a learnable training data sampler to improve model performance. A method named Swift Sampler (SS) is proposed, which is formulated as a function mapping data features to sampling probabilities, represented by a small number of parameters. In addition, SS smooths the objective function landscape to improve optimization efficiency and uses an approximation method to efficiently evaluate candidate samplers without full retraining. The experimental results for image classification tasks on CIFAR-10, CIFAR-100, ImageNet, and face datasets show improved performance. Strengths: 1. The proposed method is novel with only a few parameters, enabling application to large datasets. 2. Objective function smoothing and approximation methods improve search efficiency. 3. The solution is reasonable. The proposed inner loop and outer loop pipeline with Dimension Reduction, Smooth the Objective Function, and Local Minima Approximation designs are innovative. 4. Demonstrates consistent improvements over baselines and some existing methods on multiple datasets for the image classification task. Weaknesses: 1. Can the author explain the effectiveness of components separately? 2. The paper does not provide enough theoretical analysis or justification for its proposed formulation, transform function, and approximation method. Can the author provide more profound justification? 3. The paper does not conduct ablation studies or sensitivity analysis to show the impact of different components or hyper-parameters of its method. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. Can the author explain the effectiveness of components separately? 2. The paper does not provide enough theoretical analysis or justification for its proposed formulation, transform function, and approximation method. Can the author provide more profound justification? 3. 
The paper does not conduct ablation studies or sensitivity analysis to show the impact of different components or hyper-parameters of its method. Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The experiments are only conducted for image classification tasks, while its effectiveness on other vision tasks is not clear. Since this paper is not limited to image classification, other tasks should be discussed as well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **W1: Can the author explain the effectiveness of components separately?** Thank you for your valuable feedback. Each component of the SS method addresses a specific challenge in optimizing the sampling probabilities: 1. **Low-Dimensional Representation:** Reduces the search space, making optimization feasible. 2. **Bayesian Optimization:** Efficiently searches the reduced space, balancing exploration and exploitation. 3. **Smoothing Transform Function:** Creates a more tractable optimization landscape by evening out gradient distributions. 4. **Local Minima Approximation:** Reduces computational cost by leveraging shared pre-trained models for fine-tuning. --- ### **W2: More theoretical analysis or justification.** Thank you for your valuable feedback. The high-dimensional nature of sampling probabilities makes direct optimization computationally infeasible. To address this, we represent the sampling probabilities using a low-dimensional parameterization, which allows us to capture the essential characteristics of the sampling distribution with fewer parameters. We approximate the true sampling function $\tau(x)$ as: \begin{equation} \hat{\tau}(x) = H(T(G(x))) \end{equation} where \begin{equation} G(x) = \sum_{i=1}^{N} \boldsymbol{c}_i \cdot \boldsymbol{f}_i(x) \end{equation} $H$ is a piecewise linear function, and $T$ is a smoothing transform function. This approach leverages the theory of functional approximation. The objective function can exhibit sharp variations due to variable gradients. A smoothing transform function redistributes gradients more evenly. The cumulative gradient function $(cgf)$ is defined as: \begin{equation} T(u) = \text{cgf}(u) = \frac{\sum_{x_i \in D_t, G(x_i) \leq u} \text{grad}(x_i)}{\sum_{x_i \in D_t} \text{grad}(x_i)} \end{equation} This transformation ensures a uniform gradient distribution, making the optimization landscape smoother. 
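As a concrete illustration of the cumulative gradient function above, the sketch below evaluates $T(u) = \text{cgf}(u)$ at each sample's own $G$ value: after the transform, the gradient mass is distributed uniformly over $[0, 1]$. This is an assumed minimal implementation, not the authors' code.

```python
import numpy as np

def cgf_transform(g_values, grads):
    """Cumulative-gradient transform T:
    T(u) = (sum of grad(x_i) over samples with G(x_i) <= u)
           / (total gradient mass).
    Returns T evaluated at each sample's own G value."""
    order = np.argsort(g_values)
    cum = np.cumsum(grads[order]) / grads.sum()   # cumulative gradient fraction
    out = np.empty_like(cum)
    out[order] = cum                              # map back to input order
    return out

# toy example: 4 samples with G values g and gradient magnitudes grad
g = np.array([0.9, 0.1, 0.5, 0.3])
grad = np.array([1.0, 4.0, 2.0, 1.0])
t = cgf_transform(g, grad)
print(t)   # each value is the fraction of total gradient at or below that G
```

Note how the sample with the largest gradient (grad = 4.0) occupies the widest slice of $[0, 1]$ after the transform, which is exactly the evening-out effect the rebuttal describes.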
Training from scratch for each sampling parameter is computationally expensive. We approximate local minima by fine-tuning from a shared pre-trained starting point. Let $\boldsymbol{w}_{\text{pre-trained}}$ be the weights of a pre-trained model. The fine-tuning process adjusts these weights to approximate the local minima: \begin{equation} \boldsymbol{w}^* \approx \text{FineTune}(\boldsymbol{w}_{\text{pre-trained}}, \tau) \end{equation} This method leverages the principles of transfer learning and fine-tuning. --- ### **W3: Impact of different components or hyper-parameters of its method.** Thank you for your valuable feedback. We agree that such an analysis would be valuable for practitioners implementing our approach. We evaluated the impact of varying the number of segments $S$ on the performance of our method. The experiments were conducted on the CIFAR10 dataset with a noise rate of 20%. We tested $S = 2, 4, 6, 8$. As shown in **Table 5** of the one-page PDF, the performance improves significantly when increasing $S$ from 2 to 4. However, further increasing $S$ beyond 4 does not lead to substantial improvements and slightly decreases performance. Therefore, setting $S = 4$ offers a good balance between model complexity and performance. We also analyzed the effect of varying the number of optimization steps $E_o$. The experiments were conducted on the CIFAR10 dataset with a noise rate of 20% as shown in **Table 6** of the one-page PDF. We tested $E_o = 20, 40, 60, 80$. The results indicate that increasing $E_o$ from 20 to 40 leads to a noticeable improvement in performance. Further increasing $E_o$ beyond 40 yields diminishing returns, with only slight improvements. Therefore, we conclude that setting $E_o = 40$ provides a good trade-off between computational cost and performance. --- ### **Limitations** We appreciate the reviewer's insightful comment regarding the generalizability of our method beyond image data. 
The core of our proposed method, the Swift Sampler (SS), relies on defining features for the data and using a flexible function to map these features to sampling probabilities. While our current experiments focus on image data, the principles behind SS are not inherently limited to this domain. Specifically, in our formulation, the choice of features (e.g., loss, renormed entropy) plays a crucial role. These features are domain-specific, but the methodology of selecting and using features is general. To address this concern, we employed SS on the Wikitext-2 dataset for language modeling tasks. The target model used was Wiki-GPT. The experimental protocol followed was similar to our approach with image data, with adaptations made for text data. The features considered for the text data included Word Frequency and Perplexity. The results of our experiments on the Wikitext-2 dataset are presented in the following table. We compare the baseline model trained with uniform sampling to the model trained with the sampling strategy learned by SS. | **Methods** | **Validation Set** | **Test Set** | |:-----------:|:------------------:|:------------:| | Baseline | 24.1 | 23.5 | | SS | **22.4** | **21.7** | *Table 3: Comparison of perplexity of Wiki-GPT on Wikitext-2 with and without SS. The number pairs indicate perplexity on the Wikitext-2 validation and test sets respectively.* We hope these additional experiments address your concern and show the broader applicability and contribution of our method. Thank you again for your valuable feedback. --- Rebuttal Comment 1.1: Comment: After reviewing the authors' rebuttal, I am pleased to see that they have thoroughly addressed all of my questions and concerns. The additional experiments provided demonstrate that Swift Sampler is a versatile data sampling method applicable to various types of data, including images and text. 
I particularly appreciate that the authors showed its effectiveness across both large-scale and small-scale datasets. Moreover, the added theoretical analysis further enhances the credibility of this research. Given the sufficient novelty of the proposed method and the improvements made in response to the review, I am raising my score to 7. --- Rebuttal 2: Comment: We are very grateful for your recognition. We will integrate these results into the next version of our paper based on your valuable feedback.
Summary: The purpose of this paper is to create a sampler that can assign appropriate sampling probabilities to training data in order to improve performance. Unlike previous approaches that relied on heuristic rules or expensive learning methods, this paper proposes an automatic and efficient sampler search algorithm called SS. Specifically, SS employs a new formulation to map a sampler to a lower-dimensional space of hyper-parameters and uses an approximated local minimum to quickly evaluate the quality of a sampler. SS can be applied to large-scale datasets with high efficiency and leads to performance gains on various datasets, e.g., CIFAR10, CIFAR100, ImageNet-1k, and YTF. Strengths: (1) The motivation of this paper is clearly illustrated and convincing. How to efficiently and effectively search for a proper data sampling policy is important. (2) The solution is reasonable. The proposed inner-loop and outer-loop pipeline with the Dimension Reduction, Smooth the Objective Function, and Local Minima Approximation designs is innovative. (3) The performance reported for different models in Table 2 shows the generalizability of the proposed method. Weaknesses: (1) The outer loop searches for the sampler that has the best score on the validation set. Are the final results in experiments also reported on this validation set? The authors should clarify this potentially misleading point. (2) The performance gains on Swin are smaller than those on RN and SRN. I suspect this is related to different optimizers (SGD vs. AdamW) or building blocks (Conv vs. Transformer). The authors may add some discussion and experiments. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **W1: Are the final results in experiments also reported on the validation set?** Thank you for your insightful feedback. In the manuscript, we utilize two distinct validation sets: 1. **Outer Loop Validation Set:** Used within the Bayesian Optimization process to evaluate and search for the optimal sampler. This set is employed during the training phase to guide the optimization process. 2. **Evaluation Validation Set:** A separate set used to report the final performance results of our model in the experiments. This set is not used during the training or optimization process and is reserved solely for the final evaluation to ensure an unbiased assessment of model performance. By keeping these sets strictly separate, we ensure an unbiased and accurate assessment of our model's performance. We hope this clarification resolves any concerns about the usage of validation sets and ensures that our results are interpreted correctly. Thank you again for your valuable feedback. --- ### **W2: The performance gains on Swin are less than those of RN and SRN. Suppose this is related to different optimizers (SGD v.s. AdamW) or building blocks (Conv v.s. Transformer). The author may show some discussions and experiments.** Thank you for your insightful feedback. We appreciate your observations. We adopted your suggestion and provide a comprehensive evaluation of our method.
We conducted additional experiments to investigate the impact of different optimizers and building blocks on the performance of SS. We conducted experiments on CIFAR-10 and ImageNet datasets using the following models and optimizers: 1. **ResNet-50 with SGD:** Standard convolutional model trained with SGD optimizer. 2. **Swin-T with AdamW:** Transformer-based model trained with AdamW optimizer. 3. **Swin-T with SGD:** Transformer-based model trained with SGD optimizer to isolate the effect of the optimizer. 4. **ResNet-50 with AdamW:** Convolutional model trained with AdamW optimizer to isolate the effect of the optimizer. | **Model** | **Optimizer** | **CIFAR-10 Accuracy (\%)** | **ImageNet Accuracy (\%)** | |:---------:|:-------------:|:-------------------------:|:--------------------------:| | ResNet-50 | SGD | 94.5 | 76.3 | | ResNet-50 | AdamW | 94.3 | 76.1 | | Swin-T | AdamW | 94.0 | 77.7 | | Swin-T | SGD | 93.8 | 77.4 | *Table 1: Performance Comparison of Different Models and Optimizers* The results indicate that the choice of optimizer has a noticeable impact on the performance of both convolutional and transformer models. Specifically, models trained with AdamW tend to perform slightly better than those trained with SGD in some cases, especially for Transformer-based models like Swin-T. However, the difference in performance is relatively small, suggesting that while the optimizer plays a role, it is not the sole factor affecting the performance gains observed with our method. The performance gains achieved by our SS are indeed different for convolutional models (ResNet-50) and transformer models (Swin-T). This difference can be attributed to the inherent architectural differences between CNNs and Transformers: 1. **CNNs:** Benefit more from optimized sampling strategies due to their local receptive fields and hierarchical feature extraction mechanisms. 2. 
**Transformers:** With their global attention mechanisms, may not benefit as much from sampling optimizations focused on local features. Despite these differences, our SS still provides significant performance improvements across both types of architectures. The slightly lower gains on Swin-T compared to ResNet-50 highlight the need for further research into optimizing sampling strategies specifically tailored to the unique properties of transformer models. --- Rebuttal Comment 1.1: Comment: I appreciate the detailed rebuttal provided by the authors, which addresses most of my concerns. I will raise my rating from 5 to 6. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our detailed rebuttal. We are glad we could address your concerns. Your decision to raise the rating is greatly appreciated and serves as significant encouragement for us. We will also integrate the additional content from our rebuttal into the subsequent version of our work to further improve its quality.
Summary: The paper introduces a method for automatically learning optimal data sampling strategies using BO-based sampler search. The proposed method formulates the problem as a bilevel optimization, using a low-dimensional representation of samplers (10 parameters) and BO to search this space. Key novel points include techniques to smooth the objective function and quickly approximate model performance, making the search process computationally feasible for large datasets like ImageNet. Strengths: The paper formulates the data sampling problem as a low-dimensional optimization task, combining ideas from Bayesian optimization, curriculum learning, and hyper-parameter tuning to address the training data sampling problem. The methodology is relatively easy to follow and the problem is clearly articulated. The empirical results demonstrate consistent improvements across a range of datasets and model architectures, including large-scale problems like ImageNet. The method's ability to generalize across different tasks (image classification, face recognition) and transfer between model architectures suggests potential broader applicability. Weaknesses: While the empirical results look promising, a theoretical analysis of why the proposed low-dimensional representation of samplers works well is desired. For example, a discussion on the theoretical bounds or guarantees of this approach would strengthen the paper and provide insights into when and why the SS algorithm might fail. The paper fixes several hyperparameters (e.g., number of segments S=4, optimization steps E_o=40) without much discussion. An analysis of the method's sensitivity to these choices would be valuable for practitioners implementing this approach. Technical Quality: 3 Clarity: 2 Questions for Authors: * How does the performance of Swift Sampler change when applied to tasks with more significant class imbalance or long-tailed distributions? Does it maintain its effectiveness in such scenarios?
* The paper demonstrates good transferability between model architectures. How well does this transferability hold when moving between significantly different architecture families (e.g., from CNNs to Transformers)? * Given that the method uses a pre-trained model to generate features for sampling, how sensitive is the performance to the quality of this initial model? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The paper uses a fixed set of features (loss and renormalized entropy) for experiments. It would be helpful to explore how the choice of features impacts the performance of the method or whether different tasks might benefit from different feature sets. This leaves open questions about the flexibility and adaptability of the approach. Lack of discussion on potential computational overhead introduced by the Swift Sampler method. The additional cost of feature computation, Bayesian optimization, and fine-tuning steps could be significant, especially for large datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **W1: Theoretical analysis** Thank you for your insightful comments. We will expand our manuscript to include a theoretical analysis section focusing on the bounds of the representation error of the SS algorithm. Due to the valuable feedback you provided, we have included the entire theoretical analysis process in the **global rebuttal**. Please refer to the response to **Question 1**. --- ### **W2: Analysis of the method's sensitivity to hyperparameters.** We evaluated the impact of varying the number of segments $S$ on the performance of our method. The experiments were conducted on the CIFAR10 dataset with a noise rate of 20%. We tested $S = 2, 4, 6, 8$. As shown in **Table 5** of the one-page PDF, the performance improves significantly when increasing $S$ from 2 to 4. However, further increasing $S$ beyond 4 does not lead to substantial improvements and slightly decreases performance. Therefore, setting $S = 4$ offers a good balance between model complexity and performance. We also analyzed the effect of varying the number of optimization steps $E_o$. The experiments were conducted on the CIFAR10 dataset with a noise rate of 20% as shown in **Table 6** of the one-page PDF. We tested $E_o = 20, 40, 60, 80$. The results indicate that increasing $E_o$ from 20 to 40 leads to a noticeable improvement in performance. Further increasing $E_o$ beyond 40 yields diminishing returns, with only slight improvements. Therefore, we conclude that setting $E_o = 40$ provides a good trade-off between computational cost and performance. --- ### **Q1: Other scenarios.** To address your concern, we conducted an additional experiment to evaluate the performance of SS in the presence of class imbalance and long-tailed distributions. Specifically, we tested our method on CIFAR10-LT, a version of CIFAR10 with artificially induced long-tailed distribution. We used the same experimental setup as described in the original paper. 
As shown in **Table 7** of the one-page PDF, the results indicate that our SS outperforms other methods in both Top-1 Accuracy and Balanced Accuracy when applied to the CIFAR10-LT dataset. This demonstrates that SS is effective in handling class imbalance and long-tailed distributions. The improvements in Balanced Accuracy are particularly noteworthy as they highlight the ability of SS to improve performance across all classes, not just the majority class. Based on our additional experiments, we observed that SS maintains its effectiveness in scenarios with significant class imbalance and long-tailed distributions. Thank you again for your valuable feedback. --- ### **Q2: Transferability.** Thank you for your insightful comment. Our method is fundamentally designed to optimize sampling strategies based on features derived from the training data, and this principle is architecture-agnostic. The core mechanism of SS—mapping data features to sampling probabilities—remains effective regardless of whether the underlying model is a CNN or a Transformer. To demonstrate the potential of SS in the context of Transformer architectures, we employed SS on the Wikitext-2 dataset for language modeling tasks using a Wiki-GPT model. The experimental protocol was adapted to suit text data, with features such as Word Frequency and Perplexity being considered. As shown in **Table 3** of the one-page PDF, the preliminary results demonstrate the effectiveness of SS in optimizing sampling strategies for Transformer-based models and tasks involving text data. --- ### **Q3: How sensitive is the performance to the quality of this initial model?** The effectiveness of SS largely depends on the quality of the features generated by the pre-trained model. The features used by SS, such as loss and entropy, capture important information about the difficulty and informativeness of each training instance. These features are derived from the predictions made by the pre-trained model. 
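For concreteness, these per-sample features can be computed directly from the pre-trained model's softmax outputs; a hypothetical sketch in which "renormed entropy" is read as entropy normalized by $\log C$ (our assumption here, which may differ from the paper's exact definition):

```python
import numpy as np

def sample_features(probs, labels):
    """Per-sample features from a pre-trained model's predictions.

    probs  : (n, C) softmax outputs of the pre-trained model
    labels : (n,) integer class labels
    Returns (n, 2): cross-entropy loss and entropy renormalized by log(C).
    (Normalizing by log(C) scales the entropy into [0, 1]; this is our
    reading of 'renormed entropy', not a verified definition.)
    """
    n, C = probs.shape
    eps = 1e-12  # guards log(0) for hard zero probabilities
    loss = -np.log(probs[np.arange(n), labels] + eps)
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    renormed = entropy / np.log(C)
    return np.stack([loss, renormed], axis=1)

# a confident correct prediction vs. a maximally uncertain (uniform) one
probs = np.array([[0.90, 0.05, 0.05],
                  [1/3, 1/3, 1/3]])
feats = sample_features(probs, np.array([0, 0]))
print(feats[0, 1] < feats[1, 1])  # confident prediction has lower entropy: True
```

The point of the sketch is that feature quality tracks prediction quality: a poorly calibrated pre-trained model distorts exactly these quantities, which is why the backbone comparison below matters.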
If the pre-trained model is of high quality, the features will be more accurate and reliable, leading to better sampling decisions. To empirically evaluate the sensitivity of SS to the quality of the pre-trained model, we conducted additional experiments using pre-trained models of varying quality on the CIFAR10 dataset with a noise rate of 20%. Specifically, we used three different backbone models available from widely-used model repositories: 1. **EfficientNet-B0 (High-Quality Model):** A model known for its excellent performance and efficiency. 2. **ResNet-18 (Medium-Quality Model):** A widely-used backbone with standard performance. 3. **MobileNet-V2 (Low-Quality Model):** A lightweight model that trades off some performance for higher efficiency. As shown in **Table 8** of the one-page PDF, the results indicate that the performance of SS is influenced by the quality of the backbone model. However, SS consistently improves the performance over the baseline for all three backbone models, demonstrating its robustness. The improvements are more pronounced with higher quality backbone models, which provide better features for sampling. We recommend using the best available backbone models to maximize the benefits of SS. Thank you again for your valuable feedback. --- ### **L1: The flexibility and adaptability of the approach.** Thank you very much for raising this question. Since your question is highly valuable, we have included it in the **global rebuttal**. Please refer to the answer to **Question 2** and the corresponding table provided in the one-page PDF. --- ### **L2: Discussion on potential computational overhead.** Thank you for raising this important question. Given its high relevance, we have included it in our **global rebuttal**. Please refer to the response to **Question 3** and the corresponding table provided in the one-page PDF. 
--- Rebuttal Comment 1.1: Comment: Thanks for the author's responses to my questions and the additional numerical results in the pdf. My main concerns are addressed, and happy to increase the rating from 4 to 6. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback and for considering our responses. We're glad the additional data addressed your concerns, and we appreciate the increased rating. Please let us know if you have any further questions.
Rebuttal 1: Rebuttal: Dear Reviewers and Area Chair, Thank you for your thoughtful and constructive feedback on our submission. We greatly appreciate the time and effort you have taken to review our work. We have carefully considered each of your comments and would like to address all of the points raised by the reviewers. Before addressing specific comments from each reviewer, we would like to address some general or common concerns that were raised: **Question 1: Theoretical analysis of why the proposed low-dimensional representation of samplers works well.** We agree that a discussion on the theoretical bounds or guarantees would strengthen our paper. We will expand our manuscript to include a theoretical analysis section focusing on the bounds of the representation error of the SS algorithm. The true sampling function $\tau(x)$ is defined as: \begin{equation} \tau(x) = F(\boldsymbol{f}_1(x), \boldsymbol{f}_2(x), \ldots, \boldsymbol{f}_N(x)) \end{equation} where $\boldsymbol{f}_i(x)$ are the features of instance $x$. The approximation $\hat{\tau}(x)$ is defined as: \begin{equation} \hat{\tau}(x) = H(T(G(x))) \end{equation} where $G(x) = \sum_{i=1}^{N} \boldsymbol{c}_i \cdot \boldsymbol{f}_i(x)$.
Assuming $F$ is Lipschitz continuous with constant $L$: \begin{equation} |F(\boldsymbol{f}(x)) - F(\boldsymbol{f}(y))| \leq L \cdot \|\boldsymbol{f}(x) - \boldsymbol{f}(y)\|_2 \end{equation} The representation error $\epsilon(x)$ is bounded by: \begin{equation} \epsilon(x) = |F(\boldsymbol{f}(x)) - H(T(G(x)))| \end{equation} \begin{equation} \epsilon(x) \leq |F(\boldsymbol{f}(x)) - F(\boldsymbol{f}(y))| + |F(\boldsymbol{f}(y)) - H(T(G(x)))| \end{equation} \begin{equation} \epsilon(x) \leq L \cdot \|\boldsymbol{f}(x) - \boldsymbol{f}(y)\|_2 + |F(\boldsymbol{f}(y)) - H(T(G(x)))| \end{equation} Assuming $\boldsymbol{f}(y) = \hat{\boldsymbol{f}}(x)$: \begin{equation} \epsilon(x) \leq L \cdot \|\boldsymbol{f}(x) - \hat{\boldsymbol{f}}(x)\|_2 + |F(\hat{\boldsymbol{f}}(x)) - H(T(G(x)))| \end{equation} Since $H(T(G(x)))$ approximates $F(\hat{\boldsymbol{f}}(x))$: \begin{equation} \epsilon(x) \leq L \cdot \|\boldsymbol{f}(x) - \hat{\boldsymbol{f}}(x)\|_2 + \epsilon' \end{equation} This bound indicates that the error introduced by our low-dimensional representation is controlled by the Lipschitz constant of the sampling function $F$ and the error in the feature space representation, plus a small approximation error $\epsilon'$. --- **Question 2: The paper uses a fixed set of features (loss and renormalized entropy) for experiments. It would be helpful to explore how the choice of features impacts the performance of the method or whether different tasks might benefit from different feature sets. This leaves open questions about the flexibility and adaptability of the approach.** To address your concerns, we have conducted additional experiments to analyze the effect of different feature sets on the performance of our method. We expanded our experiments to include a broader range of features, such as gradient norm and prediction confidence, alongside the original features (loss and renormalized entropy). 
We conducted experiments on the CIFAR-10 and CIFAR-100 datasets using the following feature sets: 1. **Original Feature Set:** Loss and Renormalized Entropy 2. **Extended Feature Set 1:** Loss, Renormalized Entropy, and Gradient Norm 3. **Extended Feature Set 2:** Loss, Renormalized Entropy, Gradient Norm, and Prediction Confidence As shown in **Table 9** of the one-page PDF, the results show that the original feature set (Loss and Renormalized Entropy) provides the best performance on both CIFAR-10 and CIFAR-100 datasets, even when compared to extended feature sets including gradient norm and prediction confidence. While gradient norm and prediction confidence can provide additional information, they often overlap with the information captured by loss and renormalized entropy. This redundancy can dilute the effectiveness of the feature set, as evidenced by the marginal improvements or even slight declines in performance observed with the extended feature sets. --- **Question 3: Lack of discussion on potential computational overhead introduced by the Swift Sampler method. The additional cost of feature computation, Bayesian optimization, and fine-tuning steps could be significant, especially for large datasets.** We conducted additional experiments to measure the relative training time for each sampling method, reported in **Table 4** of the one-page PDF. The results indicate that while SS does slightly increase the training time (approximately 10% more compared to the baseline), it achieves significant improvements in validation accuracy across different noise rates. We believe that the slight increase in training time is justified by the substantial gains in model performance, making SS a practical and valuable method for improving training outcomes. --- Once again, we thank the reviewers and the area chair for their valuable suggestions.
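The end-to-end outer loop whose overhead is discussed above can be sketched as follows; plain random search stands in for the Bayesian optimizer, and a toy score function stands in for the $E_o$ fine-tuning steps (all names hypothetical, not the released code):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_score(params):
    """Stand-in for the inner loop: in the real method this would fine-tune
    from a shared pre-trained checkpoint for E_o steps under the sampler
    defined by `params` and return outer-loop validation accuracy. Here a
    toy quadratic score replaces the actual training run."""
    target = np.array([0.7, 0.3, 0.1, 0.3, 0.5, 0.8, 1.0])
    return -float(np.sum((params - target) ** 2))

def outer_loop(n_trials=200):
    """Search the low-dimensional sampler parameters (here 2 projection
    weights + 5 knots for S = 4 segments, on the order of the ~10 parameters
    in the paper). Random search stands in for Bayesian optimization."""
    best_params, best_score = None, -np.inf
    for _ in range(n_trials):
        params = rng.uniform(0.0, 1.0, size=7)
        score = train_score(params)  # one cheap inner-loop evaluation
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

params, score = outer_loop()
print(params.shape, score <= 0.0)  # (7,) True
```

The overhead structure is visible here: total extra cost is roughly `n_trials` cheap inner-loop evaluations plus feature computation, which is consistent with the ~10% training-time increase reported above when each evaluation is a short fine-tune rather than a full run.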
We have made every effort to incorporate these changes into the paper, and we believe the revised version is more comprehensive and accurate. We hope that our responses and revisions adequately address your concerns and demonstrate the value of our work. If you have any further questions or require additional clarification, please do not hesitate to contact us during the discussion period. Pdf: /pdf/0f1add246b44a4d962ff16c88fee70def79e905b.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: The main problem tackled by the paper is to obtain the optimal dataset sampler for training a deep neural network given a fixed dataset and a model. The search space of the sampler is defined as sampling probability functions over the items in the training dataset. The method involves a two-level optimization algorithm with the original (baseline) optimization task as the inner loop and the sampler hyperparameter search as the outer loop. The authors have demonstrated their method with image classification tasks, showing nontrivial enhancement of the accuracy of various models involving ConvNets and Transformers. Strengths: 1. The paper provides credible experimental results on the C10, C100, and IN1k datasets with corresponding networks (ResNets and MobileNets). 2. The method is described comprehensively with good readability. 3. The experiments in the paper are aggregated over multiple runs, enhancing the credibility of the results. 4. The method works effectively on a noisy dataset, as demonstrated in Table 1. Weaknesses: 1. I believe that the design of the optimal sampler is practically meaningful in two different scenarios: (1) foundation model training involving very large-scale datasets, typically with billions and even trillions of samples, and (2) training a model with a very limited amount of data to achieve high generalizability. However, the demonstration with C10, C100, and IN1k seems related to neither category at this moment. For example, I see a potential problem in the first category, where each item in the training set is sampled only a few times due to the large number of samples in the training set (as in GPT-3 or Stable Diffusion trained on the LAION-5B dataset). **How can this method be beneficial in more practical scenarios?** 2. The outer loop hyperparameters the authors are trying to optimize seem to extract the ‘noisiness’ or the ‘credibility’ of the samples in the training set.
However, the formulation of this might be different in different task domains, e.g., graph-based data (QM9, for example) or the highly discrete domain of text corpora (Wikitext, for example). **Is the method generalizable beyond image data?** Unless other domains are tested, I should assume that the proposed method is image-specific, which limits the contribution of the work. 3. **How much does the wall-clock training time increase when applying the proposed method?** Although the method seems to boost the validation accuracy by a nontrivial amount, it may not be so practical if the training time increases significantly. The authors are recommended to attach the relative training time variation among the sampler methods used in Table 1. 4. The search for the optimal sampler involves exploitation of the validation dataset as a probe for generalizability achieved by the outer loop. To my understanding, this means there is a trade-off: the fraction of the training set dedicated to pure validation affects the performance of the sampler. However, there is no mention of **how the validation set is sampled from the training set, and its relative size**. I am assuming that the validation sets used for reporting the accuracy in Tables 1, 2, and 3 are not the validation sets used in the outer loop (of course, if the test set is used for training, it is not fair at all). The authors are encouraged to **separate the notations of the outer loop’s validation set and the “validation” set used only for testing**. As a summary, my key concerns lie in the unresolved generalizability beyond the image domain (or generalizability to very large datasets such as LAION-5B), the strategy for sampling a validation set out of the training set for maximal performance, and the lack of explanation of the increase in wall-clock training time. Regarding the issues, I will temporarily give a WA.
Technical Quality: 3 Clarity: 3 Questions for Authors: 1. A gentle suggestion: use \citep instead of \cite in references. This will add parentheses to the citations and improve readability. For example, in lines 139 and 165. Please note that these questions are not counted in my overall scoring. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: There is no section dedicated to limitations, and the authors have high confidence in their work as shown in line 527. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **W1: How this method can be beneficial in more practical scenarios?** We appreciate the reviewer's insightful comments. To address the concerns raised, we have conducted additional experiments on large-scale datasets and tasks with limited data. 1. **Foundation Model Training on Large-Scale Datasets:** As shown in **Table 1** of the one-page PDF, we applied SS to the LAION-5B dataset, which consists of 5 billion images, using the GPT-3 model architecture for training. Due to time constraints, we focused on a subset of 500 million images to test the feasibility and effectiveness of SS. The training was conducted on a cluster with 32 NVIDIA A100 GPUs. Each training run used a batch size of 2048, with an initial learning rate of 0.1, decayed by 0.1 at the 30th, 60th, and 90th epochs, over a total of 100 epochs. Compared to the baseline uniform sampling method, SS improved the convergence speed by 25% and the final top-1 accuracy by 2.3% (from 72.4% to 74.7%). 2. **Training with Limited Data:** As shown in **Table 2** of the one-page PDF, we tested SS on a few-shot learning task using the Mini-ImageNet dataset. The dataset was split into 1-shot, 5-shot, and 10-shot scenarios. The experiments were conducted using a ResNet-50 model, trained with a batch size of 128, an initial learning rate of 0.01, and using SGD with Nesterov momentum set to 0.9. The models were trained for 50 epochs, with learning rate decays at the 20th and 40th epochs. SS improved the accuracy in the 1-shot scenario by 5.2% (from 47.6% to 52.8%), in the 5-shot scenario by 4.3% (from 63.1% to 67.4%), and in the 10-shot scenario by 3.1% (from 70.3% to 73.4%). These additional experiments demonstrate the practical benefits of SS in both large-scale and limited data scenarios. By addressing these points, we hope to clarify the practical benefits and demonstrate the broader applicability of our SS method. 
--- ### **W2: Is the method generalizable beyond image data?** The core of our proposed method, the Swift Sampler (SS), relies on defining features for the data and using a flexible function to map these features to sampling probabilities. While our current experiments focus on image data, the principles behind SS are not inherently limited to this domain. Specifically, in our formulation, the choice of features (e.g., loss, renormed entropy) plays a crucial role. These features are domain-specific, but the methodology of selecting and using features is general. To address this concern, we employed SS on the Wikitext-2 dataset for language modeling tasks. The target model used was Wiki-GPT. The experimental protocol was similar to our approach with image data, with adaptations made for text data. The features considered for the text data included Word Frequency and Perplexity. The results of our experiments on the Wikitext-2 dataset are presented in **Table 3** of the one-page PDF. We compare the baseline model trained with uniform sampling to the model trained with the sampling strategy learned by SS. We hope these additional experiments address your concern and show the broader applicability and contribution of our method. Thank you again for your valuable feedback. --- ### **W3: How much does the wall-clock training time increase when applying the proposed method?** Thank you for your valuable feedback. To address your concern, we conducted additional experiments to measure the relative training time for each sampling method, reported in **Table 4** of the one-page PDF. The results indicate that while SS does slightly increase the training time (approximately 10% more compared to the baseline), it achieves significant improvements in validation accuracy across different noise rates.
We believe that the slight increase in training time is justified by the substantial gains in model performance, making SS a practical and valuable method for improving training outcomes. --- ### **W4: How the validation set is sampled from the training set, and its relative size?** Thank you for your insightful comments. To clarify, we used two distinct validation sets in our experiments: 1. **Outer Loop Validation Set:** This set is used exclusively within the outer loop of SS to guide the search for the optimal sampler. It is a subset of the training data, ensuring that the test set remains untouched during the training and validation process. 2. **Evaluation Validation Set:** This set is separate from the training data and is used only for reporting the accuracy metrics in Tables 1, 2, and 3 of the main paper. The accuracy metrics reported in Tables 1, 2, and 3 of the main paper are based on the evaluation validation set, not the outer loop validation set. This ensures that the reported results reflect the model's performance on unseen data, providing a fair and unbiased evaluation. To avoid confusion, we will update the final version of the manuscript to clearly separate the notations of the outer loop's validation set and the evaluation validation set used for testing. --- ### **Questions** Thank you for your suggestion regarding the citation format. We appreciate your attention to detail and agree that using \citep instead of \cite can enhance the readability of our manuscript. We will revise them in the subsequent versions. --- ### **Limitations** Thank you for your valuable feedback. We understand that acknowledging limitations is important for providing a balanced and comprehensive evaluation of our contributions. Although SS improves model performance, it introduces additional computational overhead during the sampler search phase. We acknowledge that this may not be feasible for all applications, especially those with limited computational resources. 
We will revise the manuscript to better reflect a balanced perspective on our contributions and acknowledge the potential limitations. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal and for the effort to provide additional experiments, which helped improve my understanding of the work. I will raise my score as most of my concerns are resolved. Since the authors have decided to claim general applicability of their sampling method beyond images, I believe additional experiments on natural language tasks in the final version of the work will be really helpful for improving its completeness. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback and for raising your score. We appreciate your recognition of our method's potential beyond image tasks. We will incorporate these additional experiments carefully in the final version to demonstrate the broader applicability of our method.
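The core mechanism described in W2 — mapping per-example features (e.g., loss, renormed entropy) to sampling probabilities through a flexible function — can be sketched as below; the linear scorer and softmax are illustrative assumptions, not the exact parameterization used by SS:

```python
import numpy as np

def sampling_probs(features, theta, temperature=1.0):
    """Map an (n_examples, n_features) feature matrix to a sampling
    distribution over examples via a linear score and a softmax."""
    scores = features @ theta / temperature
    scores -= scores.max()            # shift for numerical stability
    p = np.exp(scores)
    return p / p.sum()

rng = np.random.default_rng(0)
feats = np.column_stack([rng.uniform(0, 5, 100),   # per-example loss (toy values)
                         rng.uniform(0, 1, 100)])  # renormed entropy (toy values)
p = sampling_probs(feats, theta=np.array([0.5, -1.0]))
batch = rng.choice(100, size=32, replace=True, p=p)  # weighted mini-batch draw
```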
How to Boost Any Loss Function
Accept (poster)
Summary: The paper presents a framework for boosting any (bounded) loss function given access to a weak learner. The authors describe how recent developments in boosting have turned the boosting problem into an optimization problem, where different combinations of assumptions on the loss, such as convexity, Lipschitzness, differentiability, etc., are used to obtain bounds on the error. The authors then point to the origin of the boosting idea: one had access to a weak learner which, given a weighted sample, produced a hypothesis slightly better than guessing on that weighted sample, and by repeatedly querying this weak learner one could produce a final classifier that performed arbitrarily well rather than just slightly better than guessing; i.e., there were no assumptions on the loss function itself. Thus the authors consider this more general setting where no extra information on the loss $F$ is assumed (other than that the loss of the hypothesis returned by the weak learner on the sample is bounded). To construct boosting for such general $F$ they look to zeroth-order optimization (access to loss values only) and find no previous work on boosting using zeroth-order optimization. The authors then present a new boosting algorithm, SECBOOST, that uses zeroth-order optimization to boost any loss function. With the aforementioned boundedness assumption on $F$ they are able to show an in-sample error guarantee which, if SECBOOST is successful and run sufficiently long, can be made arbitrarily close to the minimum of the loss function $\inf_{x\in \mathbb{R}}F(x)$. As noted by the authors, this guarantee holds when the number of iterations is large enough; they also comment on which problems can arise if SECBOOST stops too early, and how they can be alleviated. Strengths: Originality: The authors couldn't find previous work on zeroth-order optimization and boosting, so it is original in that sense.
The authors also point out how SECBOOST may differ from traditional boosting algorithms by, for instance, possibly changing signs, and by using empirical quantities relating to first- and second-order v-derivatives. Quality: The main text doesn't contain any proofs, so I will not comment on the soundness of the theoretical analysis; this is also the reason for my confidence score. The authors seem to be honest about the limitations of SECBOOST and clearly state the assumptions made in the paper. Clarity: The main text is very well written and the authors give an intuitive description of SECBOOST and the different concepts it uses, also using figures to illustrate them. Significance: As the problem of combining boosting and zeroth-order optimization is presented as new, it seems an interesting problem to look at. The framework is presented very generally, which makes it applicable in many places. Also, this new perspective on using zeroth-order optimization in boosting may inspire new ideas for boosting algorithms in problem-specific settings. Weaknesses: As also asked in the questions, to highlight the applicability of the framework it would be nice to have an example of a loss $F$ where one could compare SECBOOST to the best-known algorithms for that specific loss in the boosting setting. Technical Quality: 2 Clarity: 3 Questions for Authors: What do $R_{\star}$ and $N_{\star}$ denote? Line 142: can you explain why the sign of any $v$ in $I_{ti}(z)$ is always the sign of $-y_{i}\alpha_{t}h_{t}(x_i)$? Figure 2: are the dotted lines $w_{t,1},\ldots,w_{t,m}$? Corollary 5.4: I guess this could also be stated with just the weak learning assumption, so with 5.1 and 5.4? Why do figures 3 and 4 come before figure 2? And why are the figures not on the pages where they are used to describe the setup? Line 176: $\nu_{t}$ depends on $h_{t}$ through $M_t$; is this a problem, when we assume that the weak learner finds a hypothesis $h_t$ such that $|\nu_{t}|\geq \gamma$?
Why isn't it the case that the weak learner outputs a hypothesis $h_t$ such that $\nu_{t}\geq \gamma$ (no absolute value)? What if the weak learning guarantee were given in terms of the loss function $F$? Can you come up with a setting of $F$ where the algorithm gives a bound on the number of iterations needed, and compare it to the best known bound for that setting? Confidence: 1 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for highlighting that our " [...] main is very well written [...] " and for a long strength section that indeed summarizes well some of our contributions. We hope this rebuttal answers the questions and further strengthens the review's polarity. > What do $R_*$ and $N_*$ denote? These are the sets of reals / naturals without 0. > can you explain why the sign of any [...] Because the secant is always taken in the interval defined by $\tilde{e}_{t-1}$ and ${\tilde{e}_t}$ (we propose to put more of this information in Fig. 1 right). > Figure 2, is the dotted lines [...] The *slopes* of the dotted lines (secants) provide the weights (the information needed for optimisation). > Corollary 5.4, i guess could also be stated with just the weak learning assumption? We assume the reviewer talks about Corollary 5.6 (there is no Corollary 5.4). No, we need all assumptions. All related parameters indeed appear in the bound, but there is also a rationale for their appearance, see [RA5.4E] [RA5.4.2]. > Why does figure 3 and 4 come before figure 2? As far as we can tell, Fig. 2 is on page 7, Fig. 3 on page 8 and Fig. 4 on page 9, so the statement does not hold. As for where they appear wrt citation, there is probably a bit of LaTeX optimisation that can be done to fix it. We are happy to do it. > Line 176 $\nu_t$ depend on $h_t$ through $M_t$ is this a problem. We do not use notation $\nu_t$ here. We assume the reviewer means $\tilde{\eta}_t$. Dependence on $M_t$ is a requirement: otherwise it would unfairly penalize the weak learner. Consider for example that a weak learner predicting *all* classes well but with very little confidence could in fact not satisfy the weak learning assumption if we drop the normalization by $M_t$. In fact, many papers *directly assume* that $h_t \in [-1,1]$, see for example [ssIB] (our reply to reviewer bfG5).
> Why isn't it the case that the weak learner output the hypothesis $h_t$ such that $\nu_t \geq \gamma$ (no absolute value) (again, we consider that the reviewer talks about $\tilde{\eta}_t$) This is one beautiful feature of boosting: if the weak hypothesis is very bad, say $\tilde{\eta}_t < -\gamma$, then its negation is very good ($\tilde{\eta}_t > \gamma$), which is captured in a $<0$ leveraging coefficient! > What if the weak learning guarantee were given in terms of the loss function $F$? The weak learning assumption would then be "just about one loss" and thus very restricted. The adopted formulation is much more general. > Can you come up with a setting [...] compare it to the best known bound for that setting? We assume that the reviewer wants to know how close we can come to an optimal, *loss-dependent* bound? Can we be close to the best rates for *some* losses? The answer is yes. Consider the logistic loss. It comes from [tAP] (additional bibliography above) that our rate is within a constant factor of the one shown for the logistic loss, which is then shown to be optimal in [tAP]. Are we this good in the general case (i.e. for all losses)? Certainly not: we do not exploit curvature for strongly convex losses in the same way as AdaBoost-style boosting does, so we are as suboptimal relative to AdaBoost-type boosting for the exponential loss as logistic gradient boosting is [tAP]. Note that our remark hints at possible improvements of our algorithm to then capture a generalized notion of curvature for better boosting rates, but only for some losses. --- Rebuttal 2: Title: Weak learning assumption 5.5. Comment: The weak learning assumption I know is from "Boosting" by Robert E. Schapire and Yoav Freund, page 48.
"Specifically, we say that the empirical $\gamma$-weak learning assumption holds if for any distribution $D$ on the indices $\{1, \ldots, m\}$ of the training examples, the weak learning algorithm $A$ is able to find a hypothesis $h$ with weighted training error at most $\frac{1}{2}-\gamma$: $$ P_{i \sim D}\left[h\left(x_i\right) \neq y_i\right] \leq \frac{1}{2}-\gamma . $$" or equivalently, since $h\in \{-1,1\}$, $$ E_{i \sim D}\left[h\left(x_i\right)y_i\right] \geq 2\gamma. $$ Assuming the above would lead to a $\gamma/M_t$ margin in Assumption 5.5 as far as I can see, and in the setting presented in "Boosting", where $h\in \{-1,1\}$, would recover that of "Boosting". Can you point to other boosting literature where Assumption 5.5 is made, or is this something introduced in the paper? Sorry if I made a blunder, but I was not able to find the references [RWL3], [RDRF] and [mnwRC]; can you point me to them? Furthermore, I cannot find [ssIB] in "(our reply to reviewer bfG5)". Furthermore, I would like to be sure that I understand correctly that the zeroth-order optimization techniques used in the paper are "known" (lines 44-47) and the contribution of the paper is to introduce/make the connection to boosting? If this is correct, are there specific reference(s) that your work builds upon which you think should be cited in the Related Work or added to lines 44-47? If it is not correct, please point out the novel ideas you use in terms of zeroth-order optimization. --- Rebuttal Comment 2.1: Title: On the weak learning assumption, some previous uses, and tools and techniques that we introduce not used in 0th optimisation. Comment: > Assuming the above would lead to a $\gamma'/M_t$ margin in assumption 5.5  It is in fact $2\gamma'/M_t$, but the factor 2 is a detail: what is more important is that $M_t=1$ and so it simplifies to $|\tilde{\eta}_t| \geq 2\gamma$, and the right-hand side is of the same order as ours.
Note that this also goes the other way: take some $h_t/M_t \in [-1,1]$ that satisfies Assumption 5.5. Classifier $h'_t = \mathrm{sign}(h_t/M_t)$ would then satisfy [fsAD]'s weak learning assumption with $\gamma' = \gamma/2$. The constant 2 just changes the exponential's inner constant in (21) of [fsAD] as a function of our $\gamma$ (2 becomes 1/2) and does not change the convergence rate's order. Assumption 5.5 is thus equivalent to [fsAD]'s weak learning assumption when $h_t \in \{-1,1\}$. In the general case, division by $M_t$ is crucial. If we don't do it, it would be intuitively hard to prove any sort of weak-to-strong boosting result: suppose that on a call, the weak hypothesis has huge positive $y_i h_t(x_i)$ on the example $i$ that has the smallest, minute-order non-zero weight, and zero on all others. Without the division by $M_t$, this hypothesis passes the weak learning assumption, but it is obvious that it would be of no use to the ensemble. Dividing by $M_t$, it fails the weak learning assumption and thus cannot be returned by the weak learner. One can remark that the first use of ${\tilde{\eta}_t}$ is in [ssIB] (it is their $r_t$). Here is a sample of papers that previously used a weak learning assumption like ours: [mnwRC] Y. Mansour, R. Nock and R.C. Williamson. Random classification noise does not defeat all convex potential boosters irrespective of model choice. ICML 2023 (see their Theorem 1) [nawBW] R. Nock, E. Amid and M. Warmuth. Boosting with Tempered Exponential Measures. NeurIPS 2023 (their $\rho_t$ generalizes $\tilde{\eta}_t$) [msAT] I. Mukherjee and R. Schapire. A theory of multiclass boosting. NeurIPS 2010 (our formulation is a special case of theirs because their cost matrix is real valued and authorized to change at each iteration, so we can put $\pm h_t$ as cost and use sign($h_t$) as the class in the cost argument) [osOA] K. Oono and T. Suzuki.
Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks. NeurIPS 2020 (Their proposition 6 in their appendix gives the equivalence with our formulation via its point 2. $\gamma$ can be found in (5)) [sakmmnsxFW] A. Soen, I. Alabdulmohsin, S. Koyejo, Y. Mansour, N. Moroosi, R. Nock, K. Sun and L. Xie. Fair Wrapping for Black-box Predictions. NeurIPS 2022 (Point (i) of their Assumption 2) > Sorry if I made a blunder but I were not able to find the references [RWL3], [RDRF] and [mnwRC] can you point me to them, further I can not find [ssIB] in "(our reply to reviewer bfG5)". It is for us to apologize, as we probably did not make this explicit enough: those references can be found in the webpage using the search tool of the browser. For example, on a Mac, select [RWL3] => command+C => command+F => command+V will display its four occurrences on the page, one of which is the sought reference in part 7/8 of our reply to reviewer bfG5. > Furthermore, I would like to be sure that I understand correctly, that the techniques used in the paper of 0'th order optimization is "known"? (line 44-47) and the contribution of the paper is to introduce/make the connection to boosting? No, the contribution of the paper also encompasses new tools that we introduce. *It is our fault if this was not clear enough from the paper*. L46 indeed says that some tools can be found in 0th order optimization: it is the secant. However, we should have made explicit after that we also introduce a new notion that seems to be crucial for our analysis, the higher-order v-derivative information with variable offsets. This notion is not even defined in our bedside book of quantum calculus. We refer to [RNT2] above for a more technical explanation of our contribution (we have not seen our technique replacing the classical Taylor expansion by a bound involving multiple order v-derivative information in 0th order optimisation). 
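For readers unfamiliar with quantum calculus, here is a minimal numerical sketch of the finite-offset (v-)derivative idea the rebuttal refers to — the secant slope over a finite offset $v$, assuming the standard $h$-derivative form $D_v f(x) = (f(x+v)-f(x))/v$, which recovers the classical derivative as $v \to 0$ (an illustration only, not the paper's higher-order variable-offset construction):

```python
def v_derivative(f, x, v):
    """Secant slope of f over the offset v (quantum-calculus h-derivative);
    only zeroth-order information (function values) is needed."""
    return (f(x + v) - f(x)) / v

f = lambda x: x ** 2          # toy differentiable loss
# For f(x) = x^2, D_v f(x) = 2x + v exactly, so the secant slope approaches
# the classical derivative f'(x) = 2x as the offset v shrinks.
print(v_derivative(f, 1.0, 0.5))    # = 2*1 + 0.5 = 2.5
```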
--- Rebuttal Comment 2.2: Title: Apologies for the invisibility of some "Official Comments" we asked you to check! Comment: It indeed seems that some "Official Comments" we asked you to check via tags were in fact not visible from your browser. Please accept our sincere apologies for the time you wasted trying to find them. Hopefully, this is now fixed. --- Rebuttal 3: Comment: Thanks for taking the time to reply to my questions and for correcting the $2\gamma$. Regarding: "$h_t/M_t \in [-1,1]$ that satisfies Assumption 5.5. Classifier $h'_t = \mathrm{sign}(h_t/M_t)$ would then satisfy [fsAD]'s weak learning assumption with $\gamma'=\gamma/2$": could you show this derivation? Thanks for pointing out the other implication; it was insightful for understanding the connection to, and the motivation of, Assumption 5.5 for a person only knowing boosting from the point of view presented in "Boosting" by Robert E. Schapire and Yoav Freund, page 48. If you could also add some more intuition on why the labels in the assumption are allowed to change, it would be good (from your comment it seems this is not allowed in the setup of "Boosting", where the surrogate loss is the exp-loss); also, is the learner allowed to change them entirely as it pleases, or what exactly is allowed? --- Rebuttal Comment 3.1: Title: On the relationship between some weak learning assumptions Comment: Our comment on the equivalence of past weak learning assumptions (quoted by the reviewer) seems to work only provided more constraints are put on $h_t$. Apologies for making a general statement out of it. Here is a refined statement and proof sketch involving the min / max absolute values of $h$. Denote by $h$ the weak hypothesis, $M$ its empirical max in absolute value, and $\textbf{w}$ the weights summing to 1 (normalized). Let $m$ denote its empirical non-zero min in absolute value.
Since $yh = |h|\, y\, \mathrm{sign}(h) = |h|(1 - 2 [y\neq \mathrm{sign}(h)])$ ($[\cdot]$ is the indicator variable), the WLA implies [1] $\gamma \leq \sum_i w_i y_i h(x_i) / M = \sum_i w_i |h(x_i)|/M - 2 \sum_i w_i (|h(x_i)|/M) [y_i\neq \mathrm{sign}(h(x_i))] \leq 1 - (2m/M) P$, $P$ being the empirical risk of $\mathrm{sign}(h)$ computed on $\textbf{w}$. In summary, [2] $P \leq \frac{M}{m} \cdot \left(\frac{1}{2} - \frac{\gamma}{2}\right)$. Let (A) denote the assertion $(m/M) \geq 2/\left(1+\frac{1}{1-\gamma}\right)$. If (A) is true, then we get $P \leq (1/2) - (\gamma/4)$, and we are done (for **$\gamma' = \gamma/4$**). It seems (A) can be further weakened by more sophisticated arguments, but we (expectedly, in fact) do not get to a point where the implication holds however small $m/M$ is. Where those weak learning assumptions find their "equivalence class" is in the observation that, for each of them, flipping a fair coin to decide the class would never satisfy the weak learning assumption, and thus the weak learner has to effectively "learn" some dependence between observations and classes. Apologies (twice!) for possibly misleading the reviewer.
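The chain of inequalities above can be checked numerically. In the sketch below (an illustration, not part of the rebuttal), the normalized edge $\tilde{\eta} = \sum_i w_i y_i h(x_i)/M$ plays the role of $\gamma$, and inequality [2], $P \leq \frac{M}{m}(\frac{1}{2}-\frac{\gamma}{2})$, is verified on a random instance:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
y = rng.choice([-1.0, 1.0], size=n)
# Real-valued weak hypothesis, mostly agreeing with y so the edge is positive.
h = y * rng.uniform(0.2, 1.0, size=n) * rng.choice([1.0, 1.0, 1.0, -1.0], size=n)
w = rng.uniform(size=n)
w /= w.sum()                              # normalized weights

M = np.abs(h).max()                       # empirical max |h|
m = np.abs(h)[np.abs(h) > 0].min()        # empirical non-zero min |h|
gamma = np.sum(w * y * h) / M             # normalized edge (the WLA margin)
P = np.sum(w * (np.sign(h) != y))         # weighted 0/1 error of sign(h)

assert P <= (M / m) * (0.5 - gamma / 2) + 1e-12   # inequality [2]
```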
Summary: The paper investigates the theoretical aspects of boosting algorithms in machine learning. The authors propose a new algorithm, SECBOOST, which aims to optimize any loss function using zeroth-order information. This approach is different from traditional boosting methods that require first-order information such as gradients. By leveraging tools from quantum calculus, the paper claims to extend the applicability of boosting to loss functions that are not necessarily convex, differentiable, or even continuous. The core contribution is the demonstration that boosting can be effectively performed without relying on derivative information. -- I have read the rebuttal and other reviews. Rating unchanged. Strengths: I like this paper, which is somewhat different from the majority of boosting-related work. Though deep learning methods generally come with derivative info, a vast variety of problems don't. This paper provides theoretical contributions for designing boosting algorithms for any loss function whose set of discontinuities has zero Lebesgue measure, which is a pretty general setting. It feels like a missing piece in the boosting literature, and it's nice to finally have it. Weaknesses: I would connect more to real-world applications. This doesn't necessarily mean running experiments, but at least providing examples of such loss functions and their real-world importance would be helpful. In addition, sometimes the loss function is differentiable yet getting the derivative could be expensive. Some discussion around performance vs. cost would be very helpful. This part is optional but would make the paper much stronger. Some of the material in appendix III is actually insightful and helpful. I suggest moving some of it to the main body. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the strength/weakness section.
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the whole content of the strength section, which summarizes the key strengths of our approach. > I would connect more to real-world applications. Even though we deliberately formatted our paper as a theory report, we understand the reviewer's standpoint. We in fact ran experiments (see our supplementary information), but these were really just meant as a "trial by fire" for our theory, eventually using some very "nasty" losses (see the spring loss). We were pleased that it works, but beyond that, the test of real-world applications is important and would deserve separate consideration, because of the potential our algorithm offers to deal with very complex settings: consider for example adversarial learning. In this case a "robust" empirical loss is trained with the objective of yielding good models on the "actual" domain. The mainstream approach consists of designing the empirical "robust" loss *using data modifications*. One could also think of adding the possibility of (or replacing data modifications with) designing the *loss* itself to prevent bad outcomes on generalization (e.g. to prevent categories of large margins). This could be computationally much more efficient. Regardless of the loss's design, it could be used in our algorithm *as is*, which we believe shows a strong benefit of our approach. > In addition, sometimes the loss function is differentiable yet getting the derivative could be expensive.  The reviewer is right. We suggest putting [RGRI] (answers to reviewer 8gCx) in the camera ready using the additional page. > Some of the material in appendix III are actually insightful and helpful We understand the reviewer would like some details (at least about implementation tricks) to be put in the main file. We would be happy to oblige using part of the +1 page camera ready. --- Rebuttal Comment 1.1: Comment: Appreciate the feedback.
I have also read other reviews and discussions and overall I think this is a solid work with minor limitations to be addressed in the next edit. I'll leave my score unchanged (7-accept).
Summary: - This paper discusses an alternative boosting algorithm using zeroth-order optimization techniques. The key benefit of such a technique is that it does not require the loss function to be convex, differentiable or Lipschitz. They provide theoretical results and validate them with experiments. Strengths: - The contributions of this paper in terms of generalizability are clear and interesting. - If the authors' claims are proper and the assumptions are not hard to satisfy, their contributions are definitely very important in extending the loss function class to be as general as possible in terms of loss properties. Weaknesses: - I do not understand some assumptions well, which makes me a bit confused about the strength of the algorithms and the subsequent theoretical contributions. And I think more description of the conditions should be provided: First, Assumption 5.4: is this easy to verify for some given learners? It is quite convoluted and, from my perspective, seems more like an intermediate result that requires some effort to analyze. Furthermore, will some structural properties of the loss function depend on $\rho$? If the loss is super bad, would not the associated Assumption 5.4 fail? How restrictive is Assumption 5.4? Then, Assumption 5.5: this is not the standard weak learning assumption in the literature, since the standard weak learning condition is the unweighted version, taking an expectation over the predictor and labels. And I am not sure why the condition is imposed for all $t$ and requires conditions for the subsequent $h_t$ beyond the initial weak learner. I am not sure whether it is the traditional weak learning assumption in the boosting literature. If so, more references and/or related discussion should be provided. - There are some inconsistent notations and definitions that appear here and there in the paper, which cause reader confusion, e.g., 1-order and first-order, zero-order and zeroth-order.
And on page 2, both $F(S, h)$ and $F(\cdot)$ appear, with subscripts or superscripts used alternately for different losses, which could be improved. Therefore, it is very confusing in Theorem 5.3 and Corollary 5.6 when both of them appear at the same time. Besides, there are some typos and grammar mistakes throughout the paper, and the paper needs some rewriting to improve the writing. Technical Quality: 3 Clarity: 2 Questions for Authors: - What are examples of the strict benefits of boosting for any loss function beyond those requiring gradient information? That is, the true computational / generalizability benefit of SECBOOST over existing first-order boosting methods with function approximations. I am not super familiar with all first-order boosting methods; however, I guess for those non-smooth loss objectives, we can smooth the original non-smooth loss objectives and do boosting on the surrogate loss. Please correct me if this is not the case or is quite nontrivial for some general losses. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors mention some limitations related to their Assumptions 5.4 and 5.5. However, I still feel that more discussion of the scope of these assumptions in terms of the loss function (like the weaknesses and questions I pointed out) would increase the accessibility of the paper and help readers understand better. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
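The smoothing route the reviewer mentions can be sketched as follows; using softplus as a smooth surrogate for the (non-differentiable) hinge loss is a standard illustrative choice, not something from the paper:

```python
import math

def hinge(z):
    """Non-smooth margin loss: not differentiable at z = 1."""
    return max(0.0, 1.0 - z)

def smooth_hinge(z, beta=10.0):
    """Softplus surrogate (1/beta)*log(1+exp(beta*(1-z))): differentiable
    everywhere and converging to the hinge loss as beta grows."""
    t = beta * (1.0 - z)
    # log(1+exp(t)) computed stably for large |t|
    return (max(t, 0.0) + math.log1p(math.exp(-abs(t)))) / beta

# The surrogate upper-bounds the hinge loss and the gap shrinks with beta.
print(hinge(0.5), smooth_hinge(0.5), smooth_hinge(0.5, beta=100.0))
```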
Rebuttal 1: Rebuttal: We thank the reviewer for writing that our contribution is "[...] potentially very important [...]" and hope to give here the arguments sought by the reviewer to further support this claim. > First, Assumption 5.4, is this easy to verify for some given learners? [...] [RA5.4E] This assumption is indeed technical and comes from the nature of the losses that we minimize (for another set of arguments to explain it, see [RA5.4.2]). There are in fact no "super bad" losses, only locations on the loss landscape that prevent further optimization. In classical non-convex optimization, these are just local minima, which explains why, in such works, a measure of convergence is the expected gradient norm. The equivalent in our case is the numerator of $\rho_t$. If it is zero, there is barely anything we can do to get to a better solution and Assumption 5.4 breaks. If, however, this is not the case and our numerator is strictly positive, then we can guarantee a $>0$ rate. Why do we need to include the denominator of $\rho_t$? It appears to be necessary because we consider v-derivatives: the variation of the function is thus not exactly local like for a derivative, and we need to factor in the possibility for the loss to "jiggle a lot" locally, which blurs the information of the secants for convergence. The denominator of $\rho_t$ factors in second-order v-derivatives. The reviewer may think of classical curvature in differentiable optimisation. A small denominator means little jiggling where the current solution stands, and thus better rates. See also [RA5.4] above. Is this a limitation of the use of v-derivatives? Quite the opposite in fact: our algorithm includes the possibility to escape local minima! This feat somehow comes from the use of the v-derivatives (and their higher orders) that authorise "seeing past local minima" [RA5.4.2].
To get such a guaranteed feat would however necessitate further optimizing the offset oracle, which delivers the local "horizons" within which the strong learner can look for better solutions. This would surely necessitate a paper of its own. > Then, Assumption 5.5, this is not the standard weak learning assumption [...] [RA5.5E] It is in fact the standard assumption; we refer to [RWL3] for a parallel discussion. The standard weak learning assumption treats the weights as a distribution, and they are thus normalized. If the reviewer means normalization for the hypotheses, then this became standard from [ssIB], since convergence no longer depends on the normalization constraint. See for example [mnwRC] (bibliographical references above). > And I am not sure why the condition is imposed for all $t$ and requires condition for the subsequent $h_t$ beyond the initial weak learner [...] [RA5.5F] There might be a misunderstanding here: the weak learner is not assumed to change. Only the weak hypotheses it returns can change. It is standard to assume that each of them needs to satisfy the weak learning assumption, because otherwise the weak learning assumption is so weak that the weak learner could just return an unbiased coin as predictor, which would obviously be useless for improving the ensemble. We would be happy to add some additional content to make [RA5.5E] [RA5.5F] clear. > There are some inconsistent notations and definitions We indeed have overloaded some notations (like $F$, which is indeed used both for the pointwise loss and for the population loss) in the hope of limiting notational inflation. If the reviewer feels it is not good for readability, we are happy to reverse the trend. > What are examples of the strict benefits of the boosting for any loss function beyond those requiring gradient information? [...] [RGRI] This is an excellent question.
One answer, which we hope will have become clearer at this point thanks to [RA5.4E], is that our algorithm comes with the possibility to escape local minima, which cannot "naturally" be escaped by gradient approaches, because a gradient just provides variation information at the exact point where the solution stands. Another answer is the one that has motivated the field of 0th order optimisation: computing gradients can be expensive compared to loss values (think "green AI"). This of course assumes in general that gradients are "estimated" (e.g. using autodiff), but even when they are not, remark that this then requires the computation of a function other than the loss, which, regardless of how it is done, always requires at some point some additional calculus / computation. A last answer comes from the one that originally motivated the Ada-boosting field: non-differentiable losses are difficult to optimize (first and foremost because the gradient "trick" is not accessible), and so one can instead optimize a "nice" surrogate loss (the exponential loss in the case of AdaBoost). But this comes at a price, which is that we just do not optimize the "ideal" loss *directly* anymore (the 0/1 loss in the case of AdaBoost). There is thus less that can be said about the optimization of this "ideal" loss. Our algorithm offers the possibility to directly target the optimization of this ideal loss without resorting to surrogates, *with guaranteed rates*. Of course, as we write in L235-L237, this may come with additional design choices for the oracles. We hope the limitations section in the review is now adequately addressed. --- Rebuttal 2: Comment: I acknowledge and thank the authors for their response. Btw, did the authors forget to put [RWL3] [ssIB] [RDRF] in their response to Reviewer bfG5? I still cannot find them even when searching with command + F...
And one rebuttal suggestion related to that: put common responses in the Global response instead of letting each reviewer search for where they are. Thanks a lot! --- Rebuttal Comment 2.1: Title: Sincere apologies Comment: We would like to thank the reviewer for sending these comments, which led us to realise that some comments we submitted were probably not visible (from any reviewer, it actually seems?). We have hopefully fixed this issue. Sincere apologies for wasting your time!
Summary: Boosting can be regarded as a general optimization problem, and most of the currently popular Boosting techniques tend to do so, and do it by using $1^\text{st}$ order (gradient) information to minimise a loss function. This work proposes an algorithm to minimise an arbitrary loss function using only $0^\text{th}$ order information. The authors prove that this method converges for essentially any loss function, which is unprecedented in the field. Strengths: 1. The writing style is quite acceptable. I could understand the local meaning of pretty much every sentence. 2. The result covers a **very** wide class of functions. I agree with the authors that putting it as "essentially all functions" is a fair claim. 3. The use of quantum calculus is interesting and adds flavour to the work. 4. The authors are upfront about disregarding generalisation aspects. Weaknesses: ### **Note on the choice of primary area:** This is much more of an *optimisation* than a *learning theory* paper The choice of primary area for this paper (learning theory) was poor. This work is far better placed within the area of optimisation than in learning theory. This is quite clear throughout the technical parts of the paper, and the authors even write "We do not investigate the questions of generalisation, which would entail specific design choices about the loss at hand" (within the paragraph after line 74), so I expect the authors to agree with this assessment. Given that, I honestly fail to see how the authors judged their choice of primary area to be the one that would lead their work toward the most appropriate reviewers. This issue could have been significantly mitigated by an especially strong presentation of the work. Instead, I found the presentation to be, at the very best, fair (I elaborate on this below). Overall, this work turned out to be laborious to review. 
It is not that the technical side is particularly complex, but there are many moving parts and nested concepts, making it laborious to keep track of everything without a clear roadmap. Especially when coming from a learning theory background (which, again, seems to be the authors' target audience), overcoming the weak presentation requires an amount of effort that I am not sure is reasonable for the authors to demand from the reader. Ultimately, this matter ended up lowering my confidence score since, for example, it is definitely "possible I am unfamiliar with some pieces of related work". Still, within the resources I had at my disposal, I did my best to make my review useful to the authors and the community as a whole. ### A technical (reformulated) summary of the work's contribution To help clarify my understanding of the work, I will start with an abstract of the paper in more technical terms than what I offered in the "summary" field above. The goal is to make it easier for the authors to point out and correct potential misunderstandings. (It is based on the definitions from Section 3 starting at line 74) As mentioned in the note above, the authors completely disregard generalisation. Thus, only the training set $S = \{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_m, y_m)\}$ is relevant to the work. In particular, given a hypothesis $h\colon \mathcal{X} \to \mathbb{R}$ (not the binary $\{-1, 1\}$-range as in the classical setting), we may simply consider $$f_i \coloneqq h(\mathbf{x}_i),$$ defining a vector $\mathbf{f} = (f_1, \ldots, f_m) \in \mathbb{R}^m$, since, again, **the work is oblivious to any evaluation of hypotheses outside of the $m$ points in the training set**. Let $\mathcal{G}$ be the hypothesis class of the weak learner, i.e., the set of all hypotheses that the weak learner can output. 
> NOTE 1: Not to be confused with the set $\mathcal{H}$ defined by the authors to be that of all classifiers attainable via linear combinations of hypotheses in $\mathcal{G}$. I am highlighting this as it is a bit unusual in the context of classical boosting, where one typically argues in terms of the class of weak hypotheses $\mathcal{G}$ rather than the class of ensemble classifiers $\mathcal{H}$. For simplicity, let us assume that $\mathcal{G}$ is finite: Let $n = \lvert \mathcal{G} \rvert$ and, employing the notation above, consider the enumeration $\mathcal{G} = \{\mathbf{f}^{(1)}, \ldots, \mathbf{f}^{(n)}\}$. Finally, consider a generic loss function $F\colon \mathbb{R} \to \mathbb{R}$. The problem attacked by the paper then becomes **Target Problem (reformulated):** Given **fixed vectors** $\mathbf{f}^{(1)}, \ldots, \mathbf{f}^{(n)} \in \mathbb{R}^m$ and associated labels $y_1, \ldots, y_m \in \{-1, 1\}$, and a loss function $F\colon \mathbb{R} \to \mathbb{R}$, find coefficients $\alpha_1, \ldots, \alpha_n \in \mathbb{R}$ to minimise $$\sum_{i=1}^m F\left(y_i \sum_{j=1}^n \alpha_j f^{(j)}_i\right).$$ > NOTE 2: In my first question below, I've explicitly asked the authors to confirm that this formulation is an accurate summary of the problem. Naturally, the difficulty of the problem depends on the properties of $F$. The central claim of the paper is to provide an algorithm to effectively find such a set of coefficients while requiring very little from $F$: That its points of discontinuity form a set of measure zero. Moreover, the authors provide guarantees on the sparsity of the solution as at most $T$ coefficients are non-zero, where $T$ is the number of boosting steps performed by the algorithm. > Notice that I omitted the $\gamma$-weak learner from the reformulation above because I had some issues with the authors' definition. I dedicated a subsection to this matter below. 
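To make the reformulation fully concrete, here is a small numerical sketch of the target problem above. The data, the loss, and the naive coordinate search are entirely my own illustrative choices (not taken from the paper); the point is only that the objective can be attacked with loss evaluations alone, i.e., $0^\text{th}$ order information.

```python
# Toy instance of the reformulated target problem: given fixed
# prediction vectors f^(1..n) in R^m, labels y in {-1,1}^m, and a
# (here non-differentiable) loss F, fit coefficients alpha_1..alpha_n
# by a naive 0th-order coordinate search using only loss evaluations.
import random

m, n = 8, 3                       # training points, weak hypotheses
random.seed(0)
y = [random.choice([-1, 1]) for _ in range(m)]
# fixed prediction vectors f^(j); here: noisy, mostly-aligned copies of y
f = [[yi * random.uniform(0.2, 1.0) * random.choice([1, 1, 1, -1])
      for yi in y] for _ in range(n)]

def F(z):                         # 0/1-style loss: no usable gradient
    return 1.0 if z <= 0 else 0.0

def total_loss(alpha):            # sum_i F(y_i * sum_j alpha_j f^(j)_i)
    return sum(F(y[i] * sum(alpha[j] * f[j][i] for j in range(n)))
               for i in range(m))

alpha = [0.0] * n
step = 1.0
for _ in range(50):               # accept only strict improvements,
    improved = False              # halve the step when a sweep stalls
    for j in range(n):
        for d in (step, -step):
            cand = alpha[:]
            cand[j] += d
            if total_loss(cand) < total_loss(alpha):
                alpha, improved = cand, True
    if not improved:
        step /= 2.0

print("final training loss:", total_loss(alpha))
```

Since updates are only accepted when the loss strictly decreases, the final training loss can never exceed the loss of the all-zero coefficient vector.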
### Weaknesses > My goal is to be objective and direct, but I acknowledge this can give the text a bitter-like tone. I apologise if I sound too harsh in the review. The most pervasive weakness of the work is that, while the writing style itself is acceptable, the overall presentation is poor when considering more global aspects. All the following points stem from this issue to some extent. 1. The authors are not sufficiently clear and explicit in enumerating the claimed contributions. The closest to that would be the paragraph starting at line 38, however, one cannot conclude that the claimed contribution is restricted to the contents of that paragraph. After reading the paper, I suspect that the claimed contribution boils down to (put roughly) > We provide the first study on the convergence of $0^\text{th}$ order optimisation methods for boosting under a general loss function. Moreover, we ensure convergence for the widest class of loss functions seen so far in the field of $0^\text{th}$ order optimisation in the ML setting. Namely, we cover all functions whose points of discontinuity form a set of measure zero. In particular, my understanding is that - The authors do not claim any new techniques used to obtain their results. In fact, very little emphasis is given to the proofs, with all of them being left to the appendices and no sketches being provided. - As the text does not provide explicit insights into the difficulty of the problem they are solving, the authors do not claim the problem is particularly challenging under the light of the usual frameworks used for this kind of problem. This is a fair setting for a paper and I would lean towards acceptance provided the authors do a good job in establishing the relevance of filling the gap in the literature that they claim to have filled. 
To make this point clearer, an informal and exaggerated version of it would be "Totally convincing the reader that others weren't aware of the gap and ignored it out of disinterest" (I am not suggesting this was the case). Alternatively, but preferably both, the authors could solidly establish the significance of the uniquely wide class of loss functions they cover. I believe that the authors failed to do either, and this is the main reason why I recommend rejection. While the following points are not minor, the authors can regard them as less critical than what I have discussed above. 2. Theorem 5.3 fails to provide a reasonably self-contained statement that leads to something close to what is claimed in the paragraph starting at line 38. Instead, the authors' phrasing of the statement resembles more that of a lemma, in that it requires multiple logical steps and the reference to other results to properly grasp its significance. I can see a good formal rephrasing of the statement can be hard to achieve, so the authors could consider adding a less formal version of result to the introduction. That would also help greatly with the former point. 3. The authors do not provide an exact statement for the optimisation problem at hand (something in the direction of the *Target problem* above). It is important to know early and with absolute certainty what the authors are attacking. 4. **From the perspective of learning theory**, some of the motivation provided for $0^\text{th}$ order methods seems misplaced: 1. On the theoretical side, $0^\text{th}$ order optimal weak to strong learners are known to exist: see [1]. 2. On a more practical side, in general, the performance of boosting methods does not really come from the minimisation of the associated loss function. See, for example, the discussion in [2, Section 7.3: "Loss Minimization Cannot Explain Generalization"]. 3. 
I recognise that the authors explicitly dismiss matters of generalisation, but I still believe the remarks above are relevant in assessing the motivation for the work. The authors may consider bringing up these points in some form, perhaps by mentioning how the related works compare to their results in this regard. 5. I am confused by the authors' concept of what constitutes "traditional" boosting. To me, AdaBoost is the most prototypical and traditional boosting algorithm. However, for example, in line 131 it is clear that the authors believe that "traditional" boosting requires first-order information, which is not the case for AdaBoost. 6. The discussion around Assumption 5.4 is too loose to fully justify an assumption that seems to be quite relevant (see the role of $\rho_*$ in Corollary 5.6). I understand the point made just above the assumption, but I believe that would only suffice to fully justify some special treatment of arbitrary small $\rho_t\text{s}$. I expected a formal or at least deeper discussion of the significance of the assumption. ### References [1]: Larsen, K. G. (2023, July). Bagging is an optimal PAC learner. In The Thirty Sixth Annual Conference on Learning Theory (pp. 450-468). PMLR.\ [2]: Schapire, R.E. and Freund, Y., 2013. Boosting: Foundations and algorithms. Kybernetes, 42(1), pp.164-166. ### Minor issues and suggestions #### Issues with the definition of $\gamma$-weak learner 1. The authors, unfortunately, do not define it in Section 3, "Definitions and notations", delaying it to page 7 (Assumption 5.5 at line 175). 2. The definition is not self-contained, as it depends on $\tilde{\eta}_t$ which itself depends on other parameters (see Eq. 18). 3. Honestly, I do not recognise that definition as "the traditional" one (see line 174). To me, that would be a $\gamma$-weak learner that, for any distribution over the training set, outputs a (binary) hypothesis whose average error is at most $1/2 - \gamma$. 
Of course, the paper discusses non-binary hypotheses so that a definition in terms of edges is natural. Still, I do not think the equivalence/analogy is immediate enough, and, regardless, the authors should provide a self-contained and formal definition of $\gamma$-weak learner, ideally in Section 3. Step 2.1 of Algorithm 1 does not make it totally clear what the inputs and outputs of the $\gamma$-weak learner are. 4. Assumption 5.5 might be less "global" than one could expect. It depends on the definition of $\tilde{\eta}\_t$, which depends on $M\_t \coloneqq \max\_i \lvert h\_t(\mathbf{x}\_i) \rvert$, making Assumption 5.5 have a different strength at each round if one parses Eq. 18 as $$\tilde{\eta}\_t = \frac{1}{M\_t} \sum_{i=1}^m \frac{\lvert w\_{ti} \rvert}{\sum\_i \lvert w\_{ti} \rvert} \cdot \tilde{y}\_{ti} h\_t(\mathbf{x}\_i) = \frac{1}{M\_t} \cdot \text{``edge of $h_t$ relative to a normalised version of the weights $\mathbf{w}\_t$"}.$$ > NOTE 3: Despite the difficulties, I believe I have eventually understood what the authors meant by $\gamma$-weak learner. At Step 2.1, - the notation $\lvert \mathbf{w}\_t \rvert$ refers to the vector $(\lvert w\_{t1} \rvert, \ldots, \lvert w\_{tm} \rvert)$ (I do not recall whether this was defined); - the notation $⨉$ denotes the weak learner (why not something clear like $\mathrm{WeakLearner}$?); - the definition of $\mathbf{S}\_t$ in the comment (why there?) sets the pseudo-labels; - the weak learner $⨉$ operates on the training points $\mathbf{x}\_i$ with the pseudo-labels $\tilde{y}\_{ti}$ and weights $w\_{ti}/\lVert \mathbf{w}\_t \rVert\_1$; - $⨉$ provides a hypothesis $h_t$ with an edge of at least $\gamma$, where the concept of edge needs a normalisation to contemplate hypotheses with unbounded range; (The authors chose a normalisation factor that is a function of $h_t$ while in points 3 and 4 above I remark that a global factor could be more natural). 
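To make this parsing concrete, here is a minimal sketch of the normalised-edge computation in Eq. 18 as I read it. All numbers (weights, hypothesis values, pseudo-labels) are made up for illustration; only the normalisation scheme is the point.

```python
# Illustrative computation of the normalised edge from Eq. 18, as
# parsed in NOTE 3. The data below is invented; the per-round
# normaliser M_t = max_i |h_t(x_i)| keeps the edge in [-1, 1] even
# for hypotheses with unbounded range.
w = [0.5, -1.0, 2.0, -0.25]        # current (signed) weights w_t
h = [0.8, -0.3, 1.5, 0.6]          # h_t(x_i) on the training points
y_tilde = [1, -1, 1, 1]            # pseudo-labels (given, per Eq. 18)

M = max(abs(hi) for hi in h)       # per-round normaliser M_t
W = sum(abs(wi) for wi in w)       # L1 norm of the weights

edge = sum(abs(wi) / W * yi * hi
           for wi, yi, hi in zip(w, y_tilde, h))
eta_tilde = edge / M               # normalised edge, always in [-1, 1]
print("normalised edge:", eta_tilde)
```

Since the $|w_{ti}|/W$ sum to one and each $|\tilde{y}_{ti} h_t(\mathbf{x}_i)| \le M_t$, the result is guaranteed to lie in $[-1, 1]$, which is what makes a uniform $\gamma$ threshold meaningful across rounds.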
> My point is not that it is impossible to retrieve what the authors had in mind, but that they made it significantly harder to do so than what is necessary. Also, one needs explicit formal definitions to fully appreciate contributions. #### General minor issues and suggestions - In the technical summary above, one could cover infinite $\mathcal{G}$ by considering a suitable measure and employing a (Lebesgue) integral in the objective function. I haven't thought much about this, but it's too much of a coincidence that the weak regularity assumption on $F$ that the authors found resembles the conditions for integrability so closely. - The other theorem statements also suffer from the "lemma-like" issue I mentioned above - Section 4 is somewhat representative of the general issue with the presentation. Ideally, a section like this should go along the lines of - What is the goal; - What are the original concepts; - Why they do not suit the current setting; - How you modified them; - How the change fixes the issue. Instead, the definitions introduced are left insufficiently motivated and, indeed, it is not even fully clear how novel the concepts introduced are. - I suspect you require less from the hypotheses returned by the weak learner than it may appear at line 113 and the subsequent Assumption 5.1. As the next note highlights, you don't really need it to be non-zero. Finally, since the training set is finite, to have that $\forall i \in [m],\, \lvert h(\mathbf{x}_i) \rvert < \infty$ we only need $h$ to be a real-valued function defined in $\{\mathbf{x}_1, \ldots, \mathbf{x}_m, \ldots\}\subseteq \mathcal{X}$. Long story short, I think you only need to consider a weak learner returning functions $h\colon \mathcal{X} \to \mathbb{R}$, which is a very mild assumption. (The central point here is that real-valued functions are always bounded on finite sets. Unless one is working with the extended reals, something like $f(x) = 1/\lvert x \rvert$ is not defined at 0.) 
- Consider stating more explicitly that the assumption of non-zero predictions (line 113) sacrifices no generality (I was only sure this was the case after checking Section II.2 from the Appendix) - [Eq. (4)] Consider using $(F(a)F(b))_\alpha$ - Number only the equations that are referenced in the text - Consider a version of Figure 1 with an extra dimension (a surface plot) as, after all, the Bregman Secant distortion behaves as taking 2 arguments - Avoid starting sentences with mathematical symbols (e.g., line 93) - [122] the reference to [47, Appendix, Section 4] has a bit too much packed in it. The authors should consider adding some further guidance or explanation. That should be feasible since the reference tags almost all equations in that section. - Adding hyperlinks to the steps of the algorithm (e.g., Step 2.1) would make it meaningfully more convenient to navigate the text. - Algorithm 1 can be made significantly tidier. Doing so would be a considerable improvement in the presentation. Technical Quality: 3 Clarity: 1 Questions for Authors: 1. Up to the simplifying assumption of a finite weak hypothesis class, do you agree with the reformulation of the problem I provided above (see NOTE 2)? Naturally, conveying all nuances would require more space. Still, the goal was to provide a fair summary. 2. Why do you believe "optimisation" is not a more appropriate primary area for this work than "learning theory"? 3. What happens to Definition 4.1 when considering $v = 0$? 4. Sorry if I simply missed this one, but what is your actual claim about Assumption 5.4? Is it that you expect it to always hold in practice, that there is an easy way to overcome it, or something else? Confidence: 1 Soundness: 3 Presentation: 1 Contribution: 2 Limitations: I am not sure this really applies to the work, but adding something like what I suggested in weakness 4.3 should make some relevant limitations of the work significantly clearer to many readers. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer bfG5 for a passionate review. We particularly appreciate that reviewer bfG5 put the key strength of our paper in **bold faces**. This is rarely seen in reviews in general, and in our case, for example, none of our 6-6-7 other reviews use bold faces to describe the strengths of our paper. We hope the length of the rebuttal does not discourage the reviewer from digging in (it is split into several comments below) -- it should rather serve as a token of our appreciation of the review’s content, regardless of whether the statements made actually hold or of our potentially divergent opinions. The review can be split into 3 parts, first discussing the positioning of our paper, then summarising our objective and technical “value”, and finally digging into specifics. This rebuttal uses simple tags like [RTB] to point to the relevant parts located elsewhere via a browser’s search. Our reply contains several points that would add to the paper content. We made sure that this content, alongside the one proposed for the other reviews, would fit in the additional camera-ready page: [RDML][RNT2] [R4.3]. This rebuttal part contains two sub-parts: a reply to the positioning of our paper and a reply to the questions asked in the review. ## On the positioning of our paper. > The choice of primary area for this paper (learning theory) was poor. This work is far better placed within the area of optimisation than in learning theory […] the most appropriate reviewers […] I did my best to make my review useful to the authors and the community as a whole. [ROLT] We believe we understand where the reviewer comes from, and we do not share the reviewer's opinion. One can argue that submission "strategies" do not have a clear path, in particular for targeting the humans in the review loop. It is not just about targeting reviewers; ideally, the best and most dynamic ACs have the most influential role in the game, and this year some allocation algorithms were different. 
Factor in the number of submissions and the number submitted in each primary area (unknown in advance, of course) with the risk of "overflowing": the choice of the primary area is then far from being a best bet on getting the best "humans in the loop". So we stuck to a "logical" primary area [RTB]. We may remark that our choice of primary area at least did get us a dynamic reviewer clearly open to discussing our points of disagreement -- we do not know a strategy that grants this automatically :). ## Questions > Up to the simplifying assumption of a finite weak hypothesis class, do you agree with the reformulation [...] We do not, for reasons explained above [RDRF]. However, the reviewer's comments helped us realize a simple improvement that would hopefully alleviate misunderstandings and oversimplifications [RDML]. > Why do you believe "optimisation" is not a more appropriate primary area for this work than "learning theory"? We explain it in [ROLT] and [RTB]. In one word: history. > What happens to Definition 4.1 [...] The v-derivative becomes a conventional derivative and our algorithm then becomes a more classical gradient boosting algorithm (albeit with explicit rates, which, again, are usually not stated in the state of the art). > Sorry if I simply missed this one, but what is your actual claim about Assumption 5.4?  [RA5.4.2] It is a very legitimate question! And perhaps we should have made this part more explicit or formulated it in another way. What we mean is that since the rate essentially behaves as $1/\min_t \rho_t$ and is thus vacuous if $\rho_t=0$, we must ensure a strictly positive value for this minimal value, $\rho_*$ (however small it may be). Note that this is not restrictive: we explain in L165-L166 that $\rho_t=0$ means that the algorithm converges to what looks like a (local) minimum. We also explain in L217-L227 that the offset oracle has in fact the "ability" to have the strong learner move to a better minimum. 
We would like to emphasize that this cannot be achieved with gradient-based algorithms, since the gradient only provides information at exactly the point where the algorithm stands and thus leaves it "trapped" if that is a local minimum. Note that we have not investigated optimizing the offset oracle for such tasks since it would probably require a follow-up work/paper of its own. See also [RA5.4E]. We hope the reviewer will find the answers to their questions and the information to fill the gaps they wanted to see filled in their comment "[...] I would lean towards acceptance provided the authors do a good job in establishing the relevance [...]". --- Rebuttal Comment 1.1: Title: General reply to rebuttal (preliminary) Comment: I thank the authors for their detailed response. I acknowledge that my review was likely to be met with resistance and that this probably meant a long reply. It seems suitable to reiterate that my sole intention was to provide the community with the most useful feedback I could. I will proceed to address the points raised by the authors in their reply, using multiple comments. To avoid making the discussion even longer, I will not cover all the points, omitting replies to those that I have simply acknowledged. Finally, I ask for the reader's understanding regarding the poor quality of my text, in particular its length. I wanted to reply as soon as possible, and making the text shorter would take too much time. Also, I apologise for the typos in the mathematical expressions in the review. I tried to fix those, but the TeX engine is inconsistently buggy. In particular, many of my curly braces disappeared. --- Rebuttal Comment 1.2: Title: On the positioning of the paper Comment: (This reply references [RTB] but is not my direct answer to that point. I elaborate on [RTB] in a reply to its associated "reply block".) 
I will start by, hopefully, taking a substantial portion of the discussion out of the way as part of the response to this point (also in [RTB]) seems misdirected. The authors argue (well) that their choice of primary area is **valid**, invoking, for example, historical reasons. This is unnecessary as I have not questioned it. I am well aware that there are no clear guidelines for that choice and, thus, the authors are free to choose any option. Appropriately, I would not let it directly affect my recommendation, and I tried to signal so by keeping that text as a separate subsection in the review rather than putting it as a true "weakness" (bullet) point. Also, see my careful wording in the subtitle of the section. To ensure this is fully clear and to not risk being unfair to the authors: I understand that the choice of primary area should not weigh in my grading. Thus, the authors are not obliged to further discuss the matter with me and are free to ignore what follows. I only kept those paragraphs because I was somewhat surprised by their lack of accountability for the problem (and out of naïve idealism). 11. Reading the authors' reply, especially [RTB], made me confident that they indeed "understand where the reviewer comes from", as they put in [ROLT]. They know of the existence of a "learning theory" *phylum* and a "statistics" *phylum* within boosting (notice that my choice of terms here is derived directly from the authors' text in [RTB]), and they are aware of the divergence in expertise between the two groups. Finally, I suspect that their target audience is mainly the second group or, at least, that they would recognise that experts closer to the second group are likely to have an easier time reviewing their work than those closer to the first. 12. Considering the information available to it, I cannot blame "the system" (algorithms, the chairs, etc) for being "fooled" by the authors' choice of primary area because it fooled me. 
Normally, I do not even notice the primary area of the submissions as the abstract usually carries enough information to guide my bidding. However, for this specific work, I was left thinking about the matter discussed in the paragraph above (11) after reading the abstract. Only then did I properly consider the authors' choice, understanding that it meant that the authors judged that their paper was better suited for learning theorists to evaluate than for any other community (among the many options offered by the venue). 13. After a first pass on the paper, it was clear that I had been misguided, so proper reviewing here would be expensive. Frankly, I considered dropping the review, but ultimately decided to put in the work to go through with it out of care for the community (including the authors, the other reviewers, and, especially, the meta-reviewers). That's because I immediately conjectured that > This work was going to get assigned mostly to theoreticians, which would likely give it low-confidence weak-accepts. Moreover, this would cause the work to need more reviews than usual. Reading the other reviews largely confirmed my guess, and my main point is that I do not think this was a coincidence (I get that the authors disagree with that). 14. The paragraphs above already help to motivate my "lucky guess" (the conjecture mentioned in paragraph 13). Additionally, the reasoning is simple: 1. The authors' choice made it more likely for the work to be assigned to reviewers with a suboptimal match in expertise. 2. That already increases the expected "cost" of the review process (it surely increased mine) and decreases the expected confidence of the reviewers. 3. This could be mitigated by an exceptional presentation, but I did not see that. For some substantiation, notice that the main text of the paper does not even discuss proofs, which tend to be the most demanding portions of the text. 
So, all that the reviewers are left to evaluate is the presentation of the ideas, their contextualisation in the literature, their relevance, and similar matters. Still, we see low confidence scores. 4. When confused, and lacking the long time required to fix that, many reviewers "fail fairly": they assume the authors to be right and give them weak accepts (in this venue, a 6 would be the most likely outcome since reviewers are explicitly discouraged from using the 5 "borderline" score). This is especially true for young researchers, who not only are more susceptible to the issues above but are also understandably less comfortable opposing the opinion of more experienced colleagues. "Defaulting" to positive scores also tends to be much "cheaper"; see, e.g., my situation for the opposite example: resisting is "costly". (continues with paragraph 15) --- Rebuttal Comment 1.3: Title: Answers to Questions [minor things] Comment: > [Reply to Question 2] We explain it in [ROLT] and [RTB]. In one word: history. I address those points separately in other replies. > [Reply to Question 3] The v-derivative becomes a conventional derivative [...] It does not. I understand that the authors may elsewhere implicitly indicate it does, but I am referring to what is actually written in the definition. As is, Definition 4.1 simply breaks for $v = 0$. (This is not the first time I have gotten the impression that the authors have a hyperreal mental model for numbers. This means little, but I found it curious.) Regardless, I realised that this was actually a minor issue that was more suitable for the "Minor issues and suggestions" section. I apologise for the confusion. I only brought up this point so that the authors know that it needs a patch. --- Rebuttal Comment 1.4: Title: [RDRF] (As a reply to Question 1) Comment: > The issue with the formulation [...] Which formulation? The authors mentioned two "formulation"s in the previous sentence, making it ambiguous. 
I will assume it is the one I asked about. > [...] the analysis is then implicitly carried out knowing the weak classifiers that are going to be chosen by the weak learner [...] No classifier is "chosen" in the reformulation I mentioned. All of them show up in the sum. > (because the weak learner may well learn in a set of unbounded size!) In general, it can. But, crucially, it cannot in a reply to a question starting with "Up to the simplifying assumption of a **finite weak hypothesis class**". > This forgets that in our case boosting has to be nested [...] What follows in the authors' reply are points about the specific solution in the paper. I fail to see why details of one solution could invalidate an attempt to describe the problem itself. --- Rebuttal 2: Title: [part 1/8] On the technical reformulation Comment: [RDRF] The formulation proposed by the reviewer is reminiscent of the game theory formulation for boosting [fsGT], where the full information available for training fits in a matrix or a set of predefined fixed vectors. The issue with the formulation in our case is that the *analysis* is then implicitly carried out *knowing* the weak classifiers that are going to be chosen by the weak learner (because the weak learner may well learn in a set of unbounded size!). This forgets that in our case boosting has to be nested also with (i) the computation of the leveraging coefficients (second option Step 2.3) and (ii) the interaction with the offset oracle (Step 2.5), and would render a clean convergence proof definitely a lot more challenging. [RDML] This being said, the comment helped us realize that we perhaps forgot to complete Section 3 by adding a standard ML part at the end! This part would read as follows (after L81). "*Our ML setting consists in having primary access to a weak learner WL that, when called, provides so-called *weak* hypotheses, weak because barely anything is assumed in terms of classification performance for each of them separately. 
The goal is to devise a so-called "boosting" algorithm that can take *any loss* $F$, a training sample $S$, and a target loss value $F_0$ as input and, after some $T$ calls to WL returning classifiers $h_1, h_2, ..., h_T$, returns a classifier $H_T = \sum_t {\alpha_t} h_t$ with $F(H_T,S) \leq F_0$, where the leveraging coefficients are computed by the algorithm. Notice that this is substantially more general than the classical boosting formulation where the loss would be fixed or belong to a restricted subset of functions.*" (We comment on the $\gamma$-weak learner below [RWL1][RWL2][RWL3]) --- Rebuttal 3: Title: [part 2/8] (Mis)understanding of our contributions; on our technical material, part 1/2 Comment: > In particular, my understanding is that [...] The authors do not claim any new techniques used to obtain their results […] the authors do not claim the problem is particularly challenging under the light of the usual frameworks used for this kind of problem. [RNT] It takes a full reading of the review to see that the rest of the review contradicts these statements in places -- and we disagree with them. We, however, attribute these statements to several things: the reviewer is clearly knowledgeable about AdaBoost but “reduces boosting” to AdaBoost-ing and never mentions nor discusses our contributions with respect to the gradient boosting line of work, which is a lot more productive nowadays [RTB]. By definition, gradient boosting exclusively relies on computing *gradients* (and in fact, some versions of AdaBoost do also rely on such computations [ssIB]). From the standpoint of gradient boosting, it is clear that our paper needs to rely on different tools to achieve boosting. 
Since most of the gradient boosting literature does not have convergence proofs (even less so with explicit rates, even less so in the weak-strong framework) and neither do the sparse boosting results on non-differentiable / non-convex losses (see our L66-L68), it would make sense that original tools **needed to be used** in our case. In fact, it is quite implicit from our L90: to achieve our goal, we need higher-order v-derivatives that are not even defined in a bedside book of quantum calculus! We also relied on the inference that since the state-of-the-art 0th order optimisation relies to some extent on additional assumptions about the loss, there is either something we make possible with boosting that is not yet known to be possible in the classical 0th order setting OR (non-exclusive) we pulled new tools to achieve our objectives. Given the sheer amount of work in 0th order optimisation, it would somehow be conceited to claim the former, but we surely can claim the latter, and we believe the reviewer in fact agrees with us (see the points below) - at this point, we hope the reviewer agrees that there is no such thing as a “usual framework” for “this kind of problem”… because there was in fact no framework at all yet defined to properly analyse such problems: no such result existed before ours. We are the first to provide one, and in fact the reviewer surely agrees with us when they press us to comment on this new parameter $\rho$ that perfectly captures the rate in our case [RA5.4] (continued below) --- Rebuttal Comment 3.1: Title: Reply to [RNT] 1 Comment: > It takes a full reading of the review to see that the rest of the review contradicts these statements in places -- (continues) I eventually understood that the authors were referring to [RA5.4]. I politely ask the authors to try to be explicit whenever feasible in their text since doing so would make their argument easier to follow. > -- (continued) and we disagree with them.
I am not so sure the authors do. I am confused by the structure of the argument here. For example, part of one of my two claims discussed here is > [...] the authors do not claim the problem is particularly challenging [...]. The authors claim to disagree with that, but by the end of the discussion, in the paragraph starting with > We understand the reviewer would like us to claim that the problem is challenging (last paragraph of *[part 3/8] (Mis)understanding of our contributions; on our technical material, part 2/2*), I understood that the authors are **agreeing** with me, even providing an eloquent explanation for why they do not claim what I said that they did not claim. --- Rebuttal Comment 3.2: Title: Reply to [RNT] 2 Comment: My other point under discussion (within [RNT]) is that > The authors do not claim any new techniques used to obtain their results [...] I am also somewhat confused. It seems that the authors misinterpreted the sentence to some extent. The sentence was deliberately crafted to mean what it says: "the authors do not **claim** new techniques". In particular, the authors could develop and use multiple new methods, but if they did not say that explicitly my statement would still hold. The main point here is that it should be much easier to identify the technical contributions of the paper. Among other advantages, very explicit contributions help in the review process. So much so that it is explicitly recommended by many venues, including the present one (see around L589). In fact, something explicit can be very useful here. **Request**: Could the authors provide a very explicit bullet list of their contributions, stated concisely? To further clarify my point, consider that the authors reply that > [...] it would make sense that original tools needed to be used in our case. I agree! I was expecting some very clear highlight of those nice tools, discussions about their applicability to other problems, and the like.
I thought it would surely be very easy to identify them, but I remain unsure. Part of the reason seems to be related to the following. > [...] it is quite implicit from our L90: to achieve our goal, we need higher-order v-derivatives that are not even defined in a bedside book of quantum calculus! There is a small confusion here. From L89: > Higher order $v$-derivatives can be defined [31] The text is ambiguous once more: is it that [31] defines higher-order $v$-derivatives or that the source simply says that those could be defined? Fortunately, this does not matter much since defining the first-order $v$-derivative already defines the higher-order ones by means of composition. Much like Real Analysis texts simply introduce notation for (the usual) higher-order derivatives without ceremony, it is not really something new. However, indeed, immediately after we read > though we shall need a more general definition that accommodates for variable offsets. This makes it clear that the authors are referring to $\mathcal{V}$-derivatives, where $\mathcal{V}$ is a multiset. That is, a higher-order $v$-derivative where the offsets are not necessarily all the same. Of course, I already knew that, and my small digression here was an attempt to explain why I got confused. I did not recognise that the authors were claiming that definition as a main technical contribution. This is much clearer in their rebuttal (to me and to other reviewers). I suggest they write it (explicitly!) in the text. --- Rebuttal Comment 3.3: Comment: > the reviewer [...] “reduces boosting” to AdaBoost-ing The quotes make it impossible to be sure what the authors meant. Just in case, I remark that I do not reduce boosting to AdaBoost. While I do not do research in the "gradient boosting current", I am well aware of its existence. In fact, the existence of those other "currents" of boosting is the base of my argument about the choice of primary area, for example.
Moreover, as a learning theorist, if I were as oblivious to the "gradient boosting current" as the authors suggest, that would serve as validation for that argument (the one on the primary area). --- Rebuttal 4: Title: [part 3/8] (Mis)understanding of our contributions; on our technical material, part 2/2 Comment: (continued from above) from the very first comment of the review (**Note on the choice of primary area**), we in fact suspect that the main reason for [RNT] is that we did not format our paper like a “conventional” (if there exists one) learning theory paper, where the introduction or a special section right afterwards usually frames the technical contribution per se in terms of *tools* used and key results achieved, ultimately disregarding the problem solved. From here they perhaps concluded that there was nothing worth mentioning and got to the conclusion above that we rebut here. Perhaps they got additionally confused by the statement we make in L46 that “some [tools from quantum calculus] appear to be standard in the analysis of 0th order optimisation”. The “some” mentioned is in fact the basic secant information (we have *never seen the higher-order v-derivative information used; in fact, it has never been defined in quantum calculus either, see above*). If this is the reason for the reviewer’s comments, then we deeply apologise. We in fact chose not to present our paper this way for space reasons and left the numerous implicit implications of our claims (see the first point above) to “speak for themselves”. Since there is one additional page in the camera-ready, should our paper be accepted, it would be trivial to include the following statement in the second part of the introduction (it would replace “some of which” and what follows in L45):
“[RNT2] *Our proof technique builds on a classical boosting technique for convex functions that relies on an order-one Taylor expansion to bound the progress between iterations [nwLO]. In this technique, boosting weights depend on the gradient, and thus the sample expectation of Taylor’s remainder becomes a function of the weak learner’s advantage over random guessing, which is guaranteed to have a strictly positive value by virtue of the weak learning assumption, leading to a strict decrease of the loss. In our case, we replace the Taylor expansion with a more general bound involving v-derivatives and a quantity related to a generalisation of the Bregman information [bmdgCW]. Getting the boosting rate involves the classical weak learning assumption’s advantage and a new parameter bounding the ratio of the expected weights (squared) over a generalised notion of curvature relying on second-order v-derivatives, quantifying the local potential “jiggling” of the loss (an uncertainty measure, smaller being better). Our algorithm, which learns a linear combination of weak classifiers, introduces notable generalisations compared to the AdaBoost / gradient boosting lineages, chief among them the computation of acceptable ranges for the v-derivatives used to compute boosting weights (which are always zero for classical gradient boosting).*” At this point, we hope to have clarified our claimed contributions, the technical nature of our work, the novelty of some tools we use, *and the fact that there is indeed no such thing as a "usual framework" for our problem*. We also hope that after reading [RA5.4.2], the reviewer will be convinced that our approach brings substantially more than "just" solving a technical problem. We understand the reviewer would like us to claim that the problem is challenging: such is a matter of personal perception and understanding; claiming it could be seen as somehow pretentious.
We are happy to let the reviewer conclude based on the above, but we think it is worth mentioning that a problem should just be worth solving, regardless of its technical nature. If the statement were true indeed, then an influential boosting paper would probably not have appeared, or at least not in its form: [ssIB], whose empirical convergence proofs are arguably elementary (given or not the back-then state of the art) but have been instrumental in both the design and convergence proofs of boosting algorithms for numerous settings. --- Rebuttal Comment 4.1: Comment: > If this is the reason for the reviewer’s comments, then we deeply apologise. The word "this" is replacing a lot here, so I am not entirely sure what the authors mean. If they mean the fact that they do not present their contributions clearly, then, yes, that was the reason. Crucially, there is no need to present those in any predefined format, so long as it is clear. Also, yes, the statement in L46 is confusing, but only because of the overall quality of the writing. The separation between what the authors mean and what they actually say is substantial. They also have a tendency to keep things implicit without any need, which makes things even worse. --- Rebuttal Comment 4.2: Title: (Mind the **Request** in this reply) Comment: > We [...] left the numerous implicit implications of our claims to “speak for themselves" 1. That explains a lot. 2. Although sometimes contributions do "speak for themselves", it is usually much better to stay humble and assume that does not need to be the case. 3. Making things very explicit here would be helpful. **Request**: Could the authors provide a very explicit bullet list of those "numerous implicit implications of their claims", stated concisely? --- Rebuttal Comment 4.3: Comment: > We understand the reviewer would like us to claim that the problem is challenging I do not. Simple contributions are my favourite, even more so when I am reviewing.
I re-read my review and could not locate where I said something that means what the authors seem to think I said. It appears to be yet another case of the authors reading something that was not written. What I implied was that if they established that their result was challenging to achieve (think something like "a centuries-old conjecture") it would help us gauge their contribution. But there are other ways to achieve that, of course. --- Rebuttal 5: Title: [part 4/8] Points 2-4 before references Comment: > 2. Theorem 5.3 fails to provide a reasonably self-contained statement [...] What we propose to put in the introduction [RNT2] and at the end of Section 3 [RDML] is also aimed at clarifying this result. If the reviewer also means that it takes a re-read of the manuscript to grasp some parameters in (20), we are happy to make a summary of what the key parameters are just before the theorem's statement, following a custom often seen in theory papers. > 3. The authors do not provide an exact statement for the optimisation problem [...] We conjecture that this comes from the missing ML part at the end of Section 3 [RDML] > 4.1 On the theoretical side 0th order optimal weak to strong learners are known to exist: see [1] [R4.1] This reference is irrelevant for two reasons. First, [1] (which needs to be combined with [lrOW]) leads to *sample*-optimal “AdaBoost-ing”, and sample optimality is not our focus. Second, a 0th order optimisation algorithm takes a loss as input and has to work for *large sets of losses*, not just one or a few. All references we put in Table A1 operate on large sets of losses, and the set is even wider for our algorithm, as the reviewer accurately noticed. AdaBoost does not qualify (AdaBoost optimises directly the exponential loss, indirectly the 0/1 loss).
> 4.2 On a more practical side, in general, the performance of boosting methods does not really come from the minimisation of the associated loss function [R4.2] We wholeheartedly disagree: this statement overgeneralises from [2]’s Section 7.3 and takes it out of its *very specific context*: performance = 0/1 loss, associated loss = exponential loss (the “surrogate” loss). The “*raison d’être*” of [2]’s Section 7.3 is the fact that there is more than just minimising the exponential loss behind good performance on the 0/1 loss: margin maximisation (AdaBoost’s “dynamics”) is crucial to get there. In fact, in this very specific case, it is not even enough to get good performance: early stopping for AdaBoost is crucial to get statistical consistency [btAI]. Forgoing early stopping would still grant boosting statistical consistency *if* the loss were Lipschitz [tBW], which is not the case for the exponential loss. Hence, one can get rid of this early stopping design constraint by just *choosing a different loss* (but then, on an orthogonal performance measure, one faces slower rates [tBW,wvTS]). Could we remove this *additional* margin maximisation property, so that minimising the associated loss alone leads to good performance on the 0/1 loss? It is indeed possible, and simple: just clip the exponential loss by replacing it with $F(z) = \exp(-\max\{z,u\})$ with $u>0$. It is trivial to show that any algorithm substantially beating the trivial max loss $\exp(-u)$ on average would in fact guarantee large margins, and thus good generalisation on the 0/1 loss via standard large margin classification results, e.g. [sfblBT]. Note however that minimisation would have to be carried out using a 0th order algorithm, or our approach! > 4.3 I recognise that the authors explicitly dismiss matters of generalisation, but […] the authors may consider bringing up these points in some form.
[R4.3] We dismissed generalisation for two reasons: (1) it was hard to discuss within the page limit and, more importantly, (2) classical matters relevant to generalisation would amount to *restricting* the set of losses OR putting additional constraints on our algorithm, and we chose to stick to the most general setting, thus only focusing on the empirical boosting of any loss. As examples: to get statistical consistency, we would “just” have to restrict the set of losses to Lipschitz losses [tBW]; to get statistical consistency with a strongly convex loss, we would have to consider early stopping; to learn in an adversarial setting in the simplest way with our algorithm, we would probably *not* consider Lipschitz losses because of the then “easiness” of an adversary to play against the learner [cmnowMB]. In all these settings, there would be no fundamental modification to our algorithm. We could put a few lines using the additional page to discuss such matters informally. --- Rebuttal Comment 5.1: Title: Weakness 4 Comment: Recall that this point reads > **From the perspective of learning theory**, some of the motivation provided for $0^\text{th}$ order methods seems misplaced The other three points are sub-points of this global one. I am confused by the authors' arguments once more. I felt like they were replying to other text, which only resembles mine. ### Weakness 4.1 Alongside 4.3 (the conclusion of point 4), I mean that "**From the perspective of learning theory**, it seems worth mentioning that **there exists an optimal boosting algorithm that does not make use of gradients**". Again, 4.3 starts with > 4.3 I recognise that the authors explicitly dismiss matters of generalisation, but I still believe the remarks above are relevant in assessing the motivation for the work. I maintain that opinion and, honestly, I am surprised that the authors disagree.
I am saying that **learning theorists**, who are (somehow) the target audience of the authors, could value being reminded of that fact and seeing some discussion about it. Do the authors really disagree? I made it clear in my text that this "optimal $0^\text{th}$ algorithm" does not compete with the contribution of the authors (see 4.3 again); I am saying that "the authors may consider bringing this point in some form". ### Weakness 4.2 > We wholeheartedly disagree Again, I suspect the authors do not. That is because I do not see how > [...] this statement overgeneralises from [2]’s Section 7.3 [...] It is meant to be just one "very specific context" to illustrate that going from loss minimisation to generalisation performance is not a given (and the authors' rebuttal makes that case, here). Again, my point is within a "**From the perspective of learning theory**". Overall, I think it is worth mentioning that foreseeing important ways in which the target audience may be confused is part of an excellent presentation. --- Rebuttal 6: Title: [part 5/8] Point 5 before references Comment: > 5. I am confused by the authors' concept of what constitutes "traditional" boosting. To me, AdaBoost is the most prototypical and traditional boosting algorithm. [RTB] We would be happy to agree with the reviewer as this would simplify our arguments a lot, but this is unfortunately factually untrue, as e.g. recently debated in PNAS [nnTP]. AdaBoost [fsAD] was the (learning theory’s) first boosting algorithm in the sense of Kearns/Valiant’s weak/strong learning model. After AdaBoost, a sizeable current in *statistics* started its own boosting “phylum”: *gradient boosting*, with the works of Jerome Friedman (et al.), remarking that AdaBoost could be framed as a *gradient* optimiser and then “generalising” gradient AdaBoosting to any *differentiable loss*.
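For readers following the thread, here is a minimal sketch of the Friedman-style gradient boosting loop being discussed (an illustration of the *statistical* current only, not of the paper's algorithm; the stump learner, data and learning rate below are invented for the example): each round fits a weak hypothesis to the negative gradient of a differentiable loss, which for the squared loss is simply the residual vector.

```python
# Minimal Friedman-style gradient boosting sketch on the squared loss.
# Each round fits a regression stump to the residuals (the negative
# gradient of the squared loss) and adds it, shrunk by a learning rate,
# to the ensemble. Data and hyperparameters are made up for illustration.

def fit_stump(x, r):
    """Least-squares regression stump (single threshold split) on (x, r)."""
    best = None
    for t in x:
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((ri - (lm if xi <= t else rm)) ** 2 for xi, ri in zip(x, r))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda xi, t=t, lm=lm, rm=rm: lm if xi <= t else rm

def gradient_boost(x, y, rounds=50, lr=0.5):
    """Return an ensemble predictor built by gradient boosting."""
    stumps, pred = [], [0.0] * len(x)
    for _ in range(rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]  # -dLoss/dpred
        h = fit_stump(x, residuals)
        stumps.append(h)
        pred = [pi + lr * h(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(lr * h(xi) for h in stumps)

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.0, 0.0, 1.0, 1.0, 3.0, 3.0]
f = gradient_boost(x, y)
train_mse = sum((f(xi) - yi) ** 2 for xi, yi in zip(x, y)) / len(x)
```

The point at issue in this exchange is visible in the sketch: every update requires the loss' *gradient* (here computable only because the loss is differentiable), which is exactly the dependence a 0th order approach has to remove.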
To grasp the importance of this statistical current, consider that Friedman’s founding paper [fGF] is now cited more than twice as much each year as Freund and Schapire’s founding paper [fsAD] (source: Google Scholar). Now, two key differences between the statistical current and learning theory’s AdaBoost are that (i) statistics broadened the scope of boosting to any *differentiable* loss, **but** (ii) very few of such works have convergence proofs (even less so convergence *rates*, even less so in the weak-strong model). This is where our work finds its place and justification: we considerably broaden the applicable losses of the statistical approach to any loss while proposing explicit convergence rates in AdaBoost’s weak/strong learning setting in all cases. This also explains why we ultimately picked learning theory instead of optimisation as primary area: this field grounds the rich history of boosting and roots boosting’s “phylogenetic tree” [nnTP]; progress in optimisation has been orthogonal to this history. This is not a criticism: a lot of our references (all of Table A1) are on 0th order optimisation, which has been hugely productive on this topic, while learning theory’s boosting has been mostly “deaf” to these advances. This is not surprising: the weak-strong learning setting does not explicitly call for the use of the loss’ derivatives to learn (unlike, of course, gradient descent). Given Friedman’s (et al.) take on boosting that “reduces it” to gradient descent, it was natural and justified to try to alleviate the gradient dependence, which is our contribution. It came as a pleasant surprise that we had to make no functional assumptions to get there (unlike the state-of-the-art 0th order optimisation). --- Rebuttal Comment 6.1: Title: Weakness 5 Comment: Again, I have a hard time following the argument here.
The authors start by saying that they strongly disagree with me and then follow it with a discussion that, to me, seems to largely validate my points. > [...] this is unfortunately factually untrue [...] What does "this" mean here? Which of my sentences are factually untrue: "**I** am confused [...]" or "To **me**, [...]"? Since I doubt I was "recently debated in PNAS", I assume the authors are referring to "AdaBoost is the most prototypical and traditional boosting algorithm" (Sorry for the joke. I was trying to lighten the mood. Still, the point is minor but valid: Try to avoid putting **any** unnecessary cognitive load on the reader, however small.) The "prototypical" part is not so significant and not discussed, so I will only focus on the "traditional". It seemed to me that the authors were attacking a point that I never made. I know that AdaBoost is not the most "popular" boosting algorithm and I did not suggest it had better numbers on Google Scholar than any other method. I mean what **the authors said** in more detail, > AdaBoost [fsAD] was the (learning theory’s) first boosting algorithm in the sense of Kearns/Valiant’s weak/strong learning model, or something generic like what you can find on the Wikipedia page for "Boosting (machine learning)". --- Rebuttal 7: Title: [part 6/8] Point 6 before references Comment: > 6. The discussion around Assumption 5.4 is too loose […] [RA5.4] We understand the reviewer would like a discussion about the significance of the assumption. The discussion that grounds the assumption, in L165-L171, is meant to explain why such an assumption is in fact necessary in our setting. Our work is the first on boosting that exploits 0th order information about the loss, and we have not seen higher-order v-derivatives with different offsets used before, so we assume this is indeed the first time $\rho$ appears. To strengthen the discussion, we shall make a parallel with stochastic gradient descent (SGD) on strongly convex losses.
We are happy to push some of what follows in the camera-ready, should our paper be accepted. When investigated on general strongly convex differentiable losses, the rate of SGD depends on some real number that quantifies the “niceness” of the loss. A prominent such number is the *condition number* $\kappa$, the ratio between the largest and smallest eigenvalues of the Hessian of the loss (*a second-order loss parameter*) [bsSO]. The convergence rate of SGD can be summarised as $\mathrm{Loss}(H_T) - \mathrm{Opt} \leq O (\kappa / T)$. This makes sense: the smaller $\kappa$, the more the loss resembles a 1D strongly convex curve rotated around a revolution axis, and so any gradient step has to point to a large extent towards the global optimum (Slide 29 in [bsSO]). Hence, a rough “nice picture” for a loss to grant fast SGD convergence is that of a paraboloid of revolution. Consider our case: what is the nice picture when the loss can be arbitrary for boosting? This picture becomes more complicated and there are reasons to believe that an additional "degree of complexity" is necessary: - Our weights depend on a quantity that generalises the first-order derivative (Step 2.6) and, much as in ordinary boosting, large weights in absolute value point to examples for which substantial loss variation is possible via the weak learner. We write “loss variation” and not “loss decrease” because *labels can be flipped* (compared with AdaBoost for example, Step 2.1), so having a good weak learner is no longer sufficient for good convergence. It thus makes sense that convergence would involve an aggregate of “how nice the weights are”. Perhaps surprisingly, a relevant aggregator is trivial: it is a quadratic function of the average weight, the numerator of $\rho$. In the loss space, it is just the quadratic expectation of secants’ slopes (19)! If this expectation is large in absolute value, there is leeway for better models.
This picture is particularly clear in the convex case with the exponential loss for example, as in this case it just means that we are far from the minima of the loss. - But unfortunately, in the general case of an arbitrary loss, it gives only a partial view of what is sufficient for good convergence! Indeed, we could be optimising a loss function that jiggles a lot locally (consider Griewank’s function as an example). In this case, all slopes could be located around different basins, with different local minima nearby, and the information of a large $|\overline{W}_{1,t}|$ would then not be sufficient to ensure a better overall loss afterwards. Read: while in SGD first-order information is intuitively not enough for the best characterisation of convergence, just *ensuring* convergence in our case requires higher-order v-derivative information (beyond just the secants’). We were pleased to realise that order-two v-derivative information is in fact sufficient, and this fits in the denominator of $\rho$. From a parallel with the Hessian in SGD, one can see that a small denominator yields smaller local “curvature” (i.e. less potential for local "jiggling"), and with a large enough numerator, sufficient information is then collected in a *single real* to grant a good, guaranteed boosting rate. We are sure the reviewer grasps at this point the subtleties of our approach, which surely do not follow from any state-of-the-art boosting analysis [RNT]. --- Rebuttal 8: Title: [part 7/8] After references, minor issues and suggestions -- $\gamma$ weak learner Comment: > 1. The authors, unfortunately, do not define it in Section 3 [...] [RWL1] Doing so would imply defining the notions of edge and normalized edge; it seems however that it would indeed find a legitimate place there after our proposal [RDML]. > 2. The definition is not self-contained [...] [RWL2] After [RDML] [RWL1], it would be straightforward to make it so, directly in Section 3. > 3.
Honestly, I do not recognise that definition as "the traditional" one [...] To me, that would be a $\gamma$-weak learner [...] whose average error is at most $1/2-\gamma$. [RWL3] This is the original definition [fsAD]. However, it rapidly got even weaker and generalized: the paper [fhtAL] shows that we can in fact require that the average error be slightly *different* from 1/2 instead of slightly smaller (polarity-reversal argument after their equation (20): if the weak classifier $h$ does worse than 1/2 accuracy then $-h$ does better than 1/2 accuracy). This is however still for $f \in \{-1,1\}$. The paper [ssIB] generalizes the measure to $f \in [-1,1]$: their $r_t$ (Corollary 1) is our $\tilde{\eta}_t$ and the discussion relating it to the error is right after (page 303, par. 1). From there, many papers started to adopt the edge / margin notion directly in the weak learning assumption, see for example [mnwRC] for a definition that looks just like ours. > Assumption 5.5 might be less "global" than one could expect [...] [RWL4] Misunderstanding: it would not be a good idea to present the assumption the way proposed by the reviewer because then we risk losing sight of the fact that $h$ is normalized by *its* maximal empirical value. Our definition in fact coincides with [ssIB] (Corollary 1). The proposal made later in the bullet points, to instead divide by a global term (we assume it is a term computed over all weak hypotheses), could in fact break the purpose of the weak learning framework: suppose that one $h$ has a huge $M^* = 10000$ while the others have $M=1$. Then satisfying the weak learning assumption for all those hypotheses imposes, instead of having our $|\tilde{\eta}| \geq \gamma$, to have $|\tilde{\eta}| \geq \gamma \cdot M^*/M = 10000 \gamma$, forcing the weak classifiers to be in fact strong in disguise.
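To make the normalization point concrete, here is a small numerical sketch (our own toy numbers: the weights, labels and hypothesis values are invented, and we assume the simplified edge form $\tilde{\eta} = \sum_i w_i y_i h(x_i) / \max_i |h(x_i)|$):

```python
# Toy illustration of normalizing a hypothesis' edge by *its own* maximal
# value versus dividing by a global maximum M* over all hypotheses.
# Assumed edge form: eta = sum_i w_i * y_i * h(x_i) / max_i |h(x_i)|.

def normalized_edge(w, y, h_vals):
    M = max(abs(v) for v in h_vals)          # the hypothesis' own max value
    return sum(wi * yi * v for wi, yi, v in zip(w, y, h_vals)) / M

w = [0.25, 0.25, 0.25, 0.25]                 # boosting weights (sum to 1)
y = [1, 1, -1, -1]                           # labels
h_small = [0.8, 0.6, -0.7, -0.9]             # hypothesis with max value 0.9
h_big = [v * 10000 for v in h_small]         # same predictions, rescaled

# Per-hypothesis normalization: rescaling leaves the edge unchanged.
e_small = normalized_edge(w, y, h_small)
e_big = normalized_edge(w, y, h_big)

# Dividing h_small's raw edge by the *global* maximum instead collapses it,
# so meeting |edge| >= gamma would require h_small to be strong in disguise.
M_star = max(abs(v) for v in h_big)
e_global = sum(wi * yi * v for wi, yi, v in zip(w, y, h_small)) / M_star
```

Normalizing each hypothesis by its own maximal value makes the edge invariant to rescaling, whereas a global $M^*$ collapses the edge of any hypothesis with small outputs.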
At this point, we hope we have clarified the misunderstandings, and our simple rewriting proposal would make it even easier to grasp the idea of the $\gamma$-weak learner. > Bullet points $|\textbf{w}|$ is indeed the componentwise absolute value (we will put it in Section 3). The notation X is a bug -- it should have been noted $\textbf{wl}$. We shall fix it. $\mathcal{S}_t$ is the set of examples that our algorithm uses. We have defined it in the algorithm itself. See [RWL4] for the last bullet point. --- Rebuttal Comment 8.1: Comment: > This is the original definition [fsAD]. However it rapidly got even weaker and generalized [...] Again, it seems that the authors are agreeing with me. There is a traditional definition, then others modify it (obtaining non-traditional ones). --- Rebuttal Comment 8.2: Comment: > $\mathcal{S}_t$ is the set of examples that our algorithm uses. We have defined it in the algorithm itself. I am not sure to what the authors are replying. To my quick "(Why there?)"? If so, I remain unconvinced that it is the best choice to only define a variable in a comment. --- Rebuttal 9: Title: [part 8/8] general minor issues and suggestions Comment: ## General minor issues and suggestions We proceed in bullet order > In the technical summary above [...] The suggestion seems to describe a continuous version of boosting in the vein of [awWW] for continuous EG. It is a very interesting question! > (2 following bullets) [...] it is not even fully clear how novel the concepts introduced are. Section 4 introduces concepts related to v-derivatives, with five definitions: 4.1, 4.2, 4.3, 4.4 and 4.6. It is absolutely explicit in the text that 4.1 comes from another work, 4.2 is new, 4.3 is new, 4.4 comes from another work and 4.6 is new. For all new concepts, we link them to their closest published relative. > I suspect you require less from the hypotheses returned by the weak learner [...]
The reviewer is right, but somehow this would risk "hiding" the fact that in training, a hypothesis needs to have finite values (or we just cannot compute the objective). Our formulation perhaps looks less general but was formulated as is on purpose. > Consider stating more explicitly [...] We will. > [Eq. (4)] Consider using [...] Excellent suggestion. > Number only the equations that are referenced in the text We will. > Consider a version of Figure 1 Just to confirm, we see it as a Figure where dim1 would e.g. be the difference between $z$ and $z'$ (say $z$ is fixed) and dim2 would be the offset? That would be easy and a good idea. > Avoid starting sentences with mathematical symbols Agreed, though L93 does not start like this (typo?) > [122] the reference to [47, Appendix, Section 4] has a bit too much packed Agreed. Note that from [RNT2], part of the "unpacking" would directly start from the introduction > Adding hyperlinks to the steps of the algorithm Easy. We will do it. > Algorithm 1 can be made significantly tidier. We believe we can simplify Step 2.5 and replace the table in Step 2.3 by a more conventional algorithmic convention --- Rebuttal Comment 9.1: Comment: > It is absolutely explicit in the text that [...] It is not. The authors are relying on the convention that "providing no citations means it is original", right? That leaves originality implicit (definitely not "absolutely explicit"). Also, many authors do not follow this convention strictly (I find that a pity). So, what I wrote holds, especially since it is a weaker statement than what the authors seem to notice: "not even fully clear" includes, for example, "just clear". Still, I was truly more confused than usual there, but unfortunately, I am not quite sure why. --- Rebuttal Comment 9.2: Comment: > Agreed, though L93 does not start like this (typo?) This is a good example of the communication issue here. It is not a typo (I checked). What I wrote means what it says.
--- Rebuttal 10: Title: Additional references used therein Comment: [awWW] E. Amid and M. Warmuth. Winnowing with gradient descent. COLT 2020 [bsSO] F. Bach and S. Sra. Stochastic optimization: Beyond stochastic gradients and convexity. Tutorial at NeurIPS 2016 [bmdgCW] A. Banerjee, S. Merugu, I. Dhillon and J. Ghosh. Clustering with Bregman divergences. JMLR 2005 [btAI] P. L. Bartlett and M. Traskin. AdaBoost is consistent. JMLR 2007 [cmnowMB] Cranko, Menon, Nock, Ong and Walder. Monge blunts Bayes: Hardness Results for Adversarial Training. ICML 2019. [fGF] J. Friedman. Greedy function approximation: a *gradient* boosting machine. Annals of Statistics 2001 (emphasis ours) [fhtAL] J. Friedman, T. Hastie and R. Tibshirani. Additive logistic regression: a statistical view of boosting. AoS 2000 [fsAD] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. JCSS 1997 (early version in EuroCOLT 1995). [fsGT] Y. Freund and R. Schapire. Game theory, on-line prediction and boosting. COLT 1996 [ksNO] M. Kearns and S. Singh. Near-optimal reinforcement learning in polynomial time. ICML 1998. [lrOW] K. G. Larsen and M. Ritzert. Optimal weak to strong learning. NeurIPS 2022 [mnwRC] Y. Mansour, R. Nock and R.C. Williamson. Random classification noise does not defeat all convex potential boosters irrespective of model choice. ICML 2023 [nnTP] R. Nock and F. Nielsen. The Phylogenetic Tree of Boosting has a Bushy Carriage but a Single Trunk. PNAS 2020. [nwLO] R. Nock and R.C. Williamson. Lossless or quantised boosting with integer arithmetic. ICML 2019. [sfblBT] R. Schapire, Y. Freund, P. Bartlett and W.S. Lee. Boosting the margin: a new explanation for the effectiveness of voting methods. ICML 1997. [ssIB] R. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. MLJ 1999. [tAP] M. Telgarsky. A primal-dual convergence analysis of boosting. JMLR 2012 [tBW] M. Telgarsky.
Boosting with the logistic loss is consistent. COLT 2013 [wvTS] M.K. Warmuth and S.V.N. Vishwanathan, Survey of boosting from an optimization perspective. Tutorial at ICML 2009
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
A Single-Step, Sharpness-Aware Minimization is All You Need to Achieve Efficient and Accurate Sparse Training
Accept (poster)
Summary: This paper presents a sparse training method, $S^2$-SAM, which applies sharpness-aware minimization to sparse training. The authors demonstrate that sparsity during training leads to a sharper (to use the authors' words, "more chaotic") loss surface, something that can be mitigated by a variant of sharpness-aware training. Additionally, the authors avoid the extra gradient computation of SAM by reusing the gradient from the previous step as a proxy. The authors demonstrate theoretically that the error from $S^2$-SAM is bounded, and empirically by testing their algorithm with a number of sparse training methods, as well as with dense training. Strengths: The method is practical and efficient, and achieves very good experimental results. I appreciated the clock-time comparison in addition to the usual FLOPs. Overall, the paper is quite convincing, and I personally plan to give the method a try in my own work. Weaknesses: The main experimental weakness of the paper is restricting the method to sparsities no higher than 90% on ImageNet and not doing a full ablation of the components of $S^2$-SAM. It would have been interesting to see the high-sparsity results, as it seems that $S^2$-SAM would have been quite effective there (and if not, why not)? Likewise, an ablation would have been helpful to understand how much we lose by using the previous-gradient version of SAM (the authors do try $S^2$-SAM on dense models, but compare to regular SGD, not regular SAM). Comparing with the original SAM paper, the numbers actually seem pretty promising on dense models, but this is not included in this paper. I found the sharpness figures confusing. The overall intuition is clear, but the actual process by which the figures were obtained is not. Technical Quality: 3 Clarity: 3 Questions for Authors: How much test accuracy is lost by using the previous-gradient estimate of SAM as compared to the normal SAM?
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations were adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer UgUE, Thank you for your review and thoughtful suggestions on our paper. Regarding the questions you raised, we believe they are important points that merit further attention. **W1: No result on sparsities higher than 90\% on ImageNet; how much we lose by using S$^2$SAM compared to regular SAM; no result compared with original SAM on dense model.** Thank you for your great suggestions. We select our sparsity levels based on existing sparse training papers for easy comparison. To demonstrate that S$^2$-SAM is more effective at higher sparsity, we add 95\% sparsity experiments with ResNet-50 on ImageNet. Due to limited time, we adopt the RigL and MEST algorithms with S$^2$-SAM. The results are shown in the table below. From the results, we can see that S$^2$-SAM consistently improves the accuracy of sparse training algorithms, and the improvements are similar to or higher than those at 80\% and 90\% sparsity. Therefore, we can say that S$^2$-SAM is quite effective with high sparsity on ImageNet. We will conduct more experiments with 95\% sparsity on ImageNet and add the results in the revised paper. | Method | Accuracy at 95\% sparsity | | :--- | :---: | | RigL | 69.02 | | RigL + S$^2$-SAM | **69.83** | | MEST(EM) | 69.95 | | MEST(EM) + S$^2$-SAM | **70.81** | To compare with original SAM, we have conducted experiments with sparse training algorithms in Table 4. The reason is that the focus of our paper is on the generalization ability and efficiency of sparse training, instead of dense training. As shown in Table 4, S$^2$-SAM achieves negligible accuracy drop compared to original SAM, while original SAM doubles the computational costs (i.e., 100\% more computations, half training speed). In response to your comment about dense training, we conduct ***additional experiments with SAM on dense training***, and the results are shown in the table below.
We can see that dense training with S$^2$-SAM consistently demonstrates accuracy improvements. Compared to original SAM, S$^2$-SAM only experiences negligible accuracy drop, which is completely normal since our method approximates the sharpness perturbation. We want to stress again that our paper's focus is on a ***practical scenario*** (i.e., sparse training), which means we must consider both generalization ability and the efficiency of the training. Therefore, such small accuracy degradation is totally acceptable since our method has ***zero extra cost*** compared to original SAM, which doubles the computation cost. We will integrate those results in our Table 6 and discuss this matter further in the final version of our paper. | Method | Original | S$^2$-SAM | SAM | | :--- | :---: | :---: | :---: | | **CIFAR-10** | | | | | ResNet-32 | 94.58 | 94.99 | 95.32 | | MobileNet-V2 | 94.13 | 94.55 | 94.77 | | VGG-19 | 94.21 | 94.48 | 94.71 | | **ImageNet-1K** | | | | | EfficientNet-B0 | 76.54 | 77.10 | 77.38 | | ResNet-34 | 74.09 | 74.58 | 74.77 | | ResNet-50 | 76.90 | 77.32 | 77.58 | **W2: I found the sharpness figures confusing. The overall intuition is clear, but the actual process by which the figures were obtained is not.** Thank you for your comments and we are sorry for the confusion. We use the method in citation [11] in our paper to obtain the loss surface visualizations in Figures 1 and 3. The method in citation [11] is a ***widely used*** technique to show loss surface visualizations [R1][R2][R3][R4], employing a random direction method to approximate the 2D projected space of the loss surface. We mention citation [11] in line 44 of our paper to illustrate that higher sparsity indicates a narrower structure, which suggests more chaotic behavior during training, thus degrading accuracy.
We will ***further*** explain the method of loss surface visualization in our revised paper, and we will ***cite [11] again in the captions*** of Figures 1 and 3 for clarity. [R1] Chen, Xiangning, et al. "When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations", ICLR 2022 [R2] Zhang, Xingxuan, et al. "Gradient Norm Aware Minimization Seeks First-Order Flatness and Improves Generalization", CVPR 2023 [R3] Du, Jiawei, et al. "Efficient Sharpness-aware Minimization for Improved Training of Neural Networks", ICLR 2022 [R4] Mi, Peng, et al. "Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach", NeurIPS 2022 **Q1: How much test accuracy is lost by using the previous-gradient estimate of SAM as compared to the normal SAM?** Thanks for your question. We compared S$^2$-SAM with SAM in Table 4 for sparse training. We can see from Table 4 that S$^2$-SAM sacrifices only marginal accuracy compared to SAM, but SAM requires about twice the computational cost of S$^2$-SAM or original training. And we also add a more detailed table of comparing the accuracy of SAM and S$^2$-SAM on dense model in the response of W1 (please see the table above) and we will add that to our paper later. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: Thank you to the authors for the rebuttal and additional experiments. I remain convinced that this paper will be a useful addition to our understanding of SAM and sparse training, and I keep my (positive) score. --- Reply to Comment 1.1.1: Title: Thank you for your support Comment: Dear Reviewer UgUE, We sincerely appreciate the time and effort you’ve invested in providing thoughtful and constructive feedback on our submission. We're delighted that you view our work as a valuable contribution to SAM and sparse training. We hope our responses have fully addressed your concerns. If you have any further questions, we would be glad to follow up. Best regards, The Authors
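Since S$^2$-SAM reuses the previous step's gradient as the ascent direction (avoiding SAM's extra forward-backward pass), the update loop can be sketched in a few lines. This is an illustrative numpy reconstruction on a toy quadratic with our own function names and hyperparameters, not the authors' implementation.

```python
import numpy as np

def loss(w):
    # Toy quadratic standing in for the training loss.
    return 0.5 * float(np.dot(w, w))

def grad(w):
    return w

def s2sam_step(w, g_prev, lr=0.1, rho=0.05, eps=1e-12):
    """One S^2-SAM-style step: perturb the weights along the *previous*
    gradient (no extra gradient computation), then descend from there."""
    perturb = rho * g_prev / (np.linalg.norm(g_prev) + eps)
    g = grad(w + perturb)  # the only gradient evaluation this step
    return w - lr * g, g

w = np.array([1.0, -2.0])
g = grad(w)  # bootstrap: the first step uses the current gradient
for _ in range(50):
    w, g = s2sam_step(w, g)
```

By contrast, vanilla SAM would first recompute the gradient at `w` to form the perturbation, roughly doubling the gradient cost per step.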
Summary: The authors of this paper posit that sparse training is difficult due to a chaotic loss landscape as opposed to standard training of a dense network. In order to address this problem, they propose to perform sparse training with a Sharpness Aware Minimization approach. In order to do so efficiently, they leverage the gradient of the previous iteration to identify the SAM perturbation (S^2-SAM) instead of performing a second forward-backward step like SAM. They show that this approximation of SAM also converges to an optimal solution theoretically. Extensive empirical evaluations show that the proposed method is effective for sparse training and can also improve robustness of the sparse networks. Strengths: 1. The authors propose a simple method which can be plugged into training any sparse network to improve training via sharpness minimization, without the additional cost of an extra forward pass. 2. Theoretical as well as extensive empirical evaluations are provided to showcase the effectiveness of S^2-SAM. Weaknesses: 1. The paper starts from the premise that sparse training is difficult in comparison to standard dense training. However, sparse training does not necessarily generalize poorly; in fact, some methods are able to train sparse networks that outperform their dense counterparts, as shown by Jin et al. [1] In fact, Jin et al. claim that pruning can behave as a regularizer, enabling better generalization. Similarly, Renda et al. [2] have shown that Learning Rate Rewinding (LRR), a sparsifying algorithm similar to LTs, can find sparse networks that outperform their dense counterparts. Do these findings suggest that sparse networks can also be found without performing S^2-SAM? Does LRR also find better loss landscapes for sparse networks and if so, then is S^2-SAM necessary? It would also be beneficial to compare with LRR. 2. Does performing SAM instead of the proposed S^2-SAM have a stronger effect on sparse networks, i.e.,
do sparse networks generalize better when trained with SAM instead of S^2-SAM, no matter the training cost? It would be nice to have this comparison to shed light on the robustness of the proposed training method. Technical Quality: 3 Clarity: 3 Questions for Authors: Does the proposed method largely behave as a regularizer and hence improve generalization of sparse networks? And if so, are there other such regularization methods that could potentially offer similar benefits other than the proposed S^2-SAM? For example, it was suggested by Paul et al. [3] that as long as the loss landscape is linearly connected for LTs, sparse networks generalize well. Hence, would S2-SAM allow for faster training in this case or potentially pruning a larger fraction of parameters in one iteration in comparison to Iterative Magnitude Pruning? [1] Jin, Tian, et al. "Pruning’s effect on generalization through the lens of training and regularization." Advances in Neural Information Processing Systems 35 (2022). [2] Renda, Alex, Jonathan Frankle, and Michael Carbin. "Comparing Rewinding and Fine-tuning in Neural Network Pruning." International Conference on Learning Representations. [3] Paul, Mansheej, et al. "Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask?." The Eleventh International Conference on Learning Representations. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See above Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer Z2wG, **W1: Some methods are able to train sparse networks that outperform their dense counterparts, do these findings suggest that sparse networks can also be found without performing S$^2$SAM? Is S$^2$SAM necessary? It would be beneficial to compare with LRR.** Thank you for your constructive comments. We totally agree with you that there are works which can improve the generalization of sparse training. In fact, we will also ***cite those papers and acknowledge their contributions*** to training sparse networks. What we want to stress here and in our paper is that, no matter what sparse training algorithms are used (e.g., LT-based, static sparse training, dynamic sparse training, etc.), the proposed S$^2$-SAM is ***an efficient way to boost*** the performance of such algorithms by improving accuracy with ***no*** extra cost. S$^2$-SAM is not a standalone algorithm, but a ***universal component*** that serves to help any sparse training algorithm achieve better performance. Therefore, no matter how good a sparse training algorithm is, our method remains ***necessary***. According to Table 1 and Table 2, we already show that S$^2$-SAM can work well with a variety of sparse training methods. We also run LRR and LRR+S$^2$-SAM experiments to further demonstrate its universality. We use ResNet-32 on CIFAR-10 with the original training settings, and we report the sparsity and accuracy of the IMP process of LRR. From the table below, we can see that LRR+S$^2$-SAM achieves better results compared to LRR, which further shows that S$^2$-SAM is a universally applicable method across different kinds of sparse training methods. | Method | 80\% sparsity | 90\% sparsity | 95\% sparsity | | :--- | :---: | :---: | :---: | | LRR | 94.68 | 94.05 | 93.82 | | LRR + S$^2$-SAM | 94.87 | 94.39 | 94.22 | **W2: Does performing SAM instead of the proposed S$^2$-SAM have a stronger effect on sparse networks, no matter the training cost?** Thank you for your question.
In fact, we have evaluated different sparse training methods with original SAM and the proposed S$^2$-SAM in Table 4 in our paper. It is true that original SAM achieves slightly better accuracy compared to S$^2$-SAM, but such small accuracy loss is negligible. More importantly, our paper is built upon a very ***practical*** research domain, namely training efficiency, and our method offers a practical solution for implementing sparse training in ***resource-limited environments***. According to Table 4, traditional SAM doubles the computational costs (i.e., 100\% more computations, half training speed), while our method S$^2$-SAM maintains the benefits of accuracy enhancement while achieving the same training speed as original training. Therefore, S$^2$-SAM extends the Pareto boundary by achieving superior outcomes without necessitating compromises. **Q1: Does the proposed method largely behave as a regularizer and hence improve generalization of sparse networks? And if so, are there other such regularization methods that could potentially offer similar benefits other than the proposed S$^2$-SAM? Would S$^2$-SAM allow for faster training in this case or potentially pruning a larger fraction of parameters in one iteration in comparison to IMP?** Thank you for your insightful comments. Regularization generally refers to techniques that improve the generalization ability of neural networks by preventing overfitting, such as L1 or L2 regularization, which add penalty terms to the loss function to constrain the model's complexity. Therefore, we think SAM-like approaches also behave as regularizers because they improve the generalization of deep neural networks in various settings. The key idea of SAM-like approaches is to make the model parameters robust to small perturbations in the parameter space, effectively seeking flatter minima.
This is achieved by adjusting the parameters to minimize the worst-case loss within a neighborhood around the current parameter values, so it can be characterized as a regularization method. For similar approaches that improve generalization using the sharpness of the loss surface, we have conducted a thorough survey, and the related literature is cited in our related work section (lines 284-297). For example, ESAM [13], LookSAM [14] and SAF [15] all use sharpness information to improve generalization, as well as specific algorithms to reduce cost. Different from our proposed S$^2$-SAM, which has ***zero extra computation cost***, those methods all need extra computation to find a valid perturbation, and they do not target sparse training. Applying S$^2$-SAM in Iterative Magnitude Pruning (IMP) with a larger pruning fraction is an intriguing idea. Based on our experiments, S$^2$-SAM enhances the generalization ability of a variety of sparse training methods, including LT (please see Table 1 and Table 2 in our paper). Therefore, S$^2$-SAM may also be able to use fewer LT iterations to achieve good accuracy at similar sparsity. The original LT prunes 20\% of the remaining parameters at each iteration. We perform additional experiments, which prune 30\% and 40\% of the parameters at each iteration. The results show that when a larger fraction of parameters is pruned, the LT winning ticket accuracy slightly drops. With the proposed S$^2$-SAM, the winning ticket accuracy ***improves*** as expected. We also notice that when the pruning fraction is larger, the effect of S$^2$-SAM is ***more significant***, which shows that S$^2$-SAM successfully addresses the difficulty of sparse training. We will conduct more rigorous experiments for future exploration.
| Method | Prune Ratio / Iteration | Iteration | 90\% Sparsity | | :--- | :---: | :---: | :---: | | LT | 20\% | 11 | 92.31 | | LT + S$^2$-SAM | 20\% | 11 | 92.58 | | LT | 30\% | 6 | 91.71 | | LT + S$^2$-SAM | 30\% | 6 | 92.32 | | LT | 40\% | 4 | 90.73 | | LT + S$^2$-SAM | 40\% | 4 | 91.44 | --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for a detailed explanation and providing additional experiments to highlight the effectiveness of S^2-SAM. I do believe that their proposed method will be useful for sparse training. I have increased my score. --- Reply to Comment 1.1.1: Title: Thank you for your support Comment: Dear Reviewer Z2wG, We wanted to express our sincere gratitude for raising our score. Thank you for your support and constructive comments! We will include all the updates in our revision. Best regards, The Authors
Summary: This article introduces S2-SAM (Single-step Sharpness-Aware Minimization), an innovative sharpness-aware optimization method tailored specifically for sparse training with zero extra computational cost. Strengths: 1. The method improves the generalization ability of sparse neural networks, which is a significant challenge in sparse training. 2. The loss surface figures clearly demonstrate the effectiveness of the proposed methods. 3. S2-SAM provides a general improvement across all the sparse training methods. Weaknesses: 1. S2-SAM seems to be a zero-extra-cost variant of SAM that is applied in sparse cases. However, when the sparsity is high, the extra cost of SAM can be ignored. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is the topological initialization of Table 2? Why are some of them non-uniform and others uniform? 2. The datasets in the article are all from the computer vision domain. Have you done any experiments in the NLP domain? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This paper doesn't include a limitations section Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer uQD3, We appreciate your review of our paper. The issues you have raised are very important and deserve further discussion. Below are our responses to your comments. **W1: S2-SAM seems to be a zero-extra-cost variant of SAM that is applied in sparse cases. However, when the sparsity is high, the extra cost of SAM can be ignored.** Thank you for your comment. It is true that as sparsity increases, the computation cost decreases for both training and SAM computation. However, we must also consider this scenario from a more ***practical*** perspective, which is the implementation of DNN training on ***resource-limited devices*** (i.e., the most common case for sparse training). Hardware design usually needs to consider a compact footprint to save resources. No matter how much computation is saved due to high sparsity, traditional SAM will always double the remaining computation, leading to at least a 100\% increase in the hardware footprint. There remains a significant difference in computational cost between traditional SAM and S$^2$-SAM. Ignoring the additional computation introduced by traditional SAM undermines the purpose of sparse training, namely ***applicability*** and ***efficiency***. Our method maintains the benefits of SAM while minimizing computational overhead, offering a practical solution for implementing SAM in resource-limited environments. As shown in Table 4, S$^2$-SAM sacrifices only marginal accuracy compared to SAM, yet SAM requires about ***twice*** the computational cost of S$^2$-SAM or original training. In summary, S$^2$-SAM provides significant advantages in efficiency and applicability across various computational settings, even with high sparsity. **Q1: What is the topological initialization of Table 2?
Why are some of them non-uniform and others uniform?** The topological initialization refers to the method of distributing non-zero weights across the network layers, which is presented in two ways in Table 2: uniform and non-uniform. Uniform: The sparsity $s^l$ of each individual layer is equal to the total sparsity $S$ of the network. Non-uniform (ERK): The number of parameters in the sparse convolutional layers is scaled proportionally to the width and height of the $l^{th}$ convolutional kernel. These two sparsity distributions are widely used in current research [R1][R2][R3]. The reason we use these two sparsity distributions in Table 2 is to provide a fair and comprehensive comparison with other methods. [R1] Evci U, Gale T, Menick J, et al. "Rigging the lottery: Making all tickets winners", ICML 2020 [R2] Liu, Shiwei, et al. "Do we actually need dense over-parameterization?", ICML 2021 [R3] Yuan, Geng, et al. "Mest: Accurate and fast memory-economic sparse training framework on the edge", NeurIPS 2021 **Q2: The datasets in the article are all computer vision domains. Have you done some experiments on NLP domain?** Yes, we evaluate our methods on a translation task using the Transformer model [R4] on the WMT-14 En-De dataset, reporting the best SacreBLEU scores on the validation dataset in the table below. We applied our method with uniform sparsity levels of 80\% and 90\% across all layers. Compared to MEST(EM), our method demonstrates improved performance in both dense and high-sparsity scenarios. We will also integrate more NLP results in our revised paper. | Method | SacreBLEU | | :--- | :---: | | Dense | 27.6 | | Dense + S$^2$-SAM | 27.9 | | Method | SacreBLEU | SacreBLEU | | :--- | :---: | :---: | | | 80\% sparsity | 90\% sparsity | | MEST(EM) | 27.1 | 26.4 | | MEST(EM) + S$^2$-SAM | 27.5 | 27.2 | [R4] Vaswani A, et al. "Attention is all you need", NeurIPS 2017
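The uniform vs. ERK allocation described above can be made concrete with a short sketch. This is a simplified illustration with our own toy layer shapes, and it omits the per-layer density capping used in the full ERK scheme of [R1].

```python
import numpy as np

# Toy per-layer conv shapes (n_out, n_in, kh, kw); our own example, not the paper's.
shapes = [(16, 3, 3, 3), (32, 16, 3, 3), (64, 32, 3, 3)]
global_sparsity = 0.9  # fraction of weights set to zero

params = np.array([np.prod(s) for s in shapes], dtype=float)

# Uniform: every layer gets the global sparsity S, i.e. density 1 - S.
uniform_density = np.full(len(shapes), 1 - global_sparsity)

# ERK: layer density proportional to (n_out + n_in + kh + kw) / (n_out*n_in*kh*kw),
# rescaled so the total number of nonzeros matches the global budget.
raw = np.array([sum(s) / np.prod(s) for s in shapes])
budget = (1 - global_sparsity) * params.sum()
erk_density = raw * budget / (raw @ params)
```

With these shapes, the smaller early layers end up denser than the large later ones, which is the intended effect of the non-uniform distribution.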
Summary: This paper studies the challenges of training sparse neural networks directly and identifies one of the contributing factors, i.e., the chaotic loss surface. Consequently, it proposes a new method, i.e., Single-step Sharpness-Aware Minimization (S2-SAM), tailored specially to train sparse networks. S2-SAM is based on SAM for dense neural network training, with the main difference that it uses just one gradient computation (thus, more efficient), while SAM uses two gradient computations. Experimental results show unanimously that S2-SAM improves the performance of all sparse training methods studied. Strengths: * Original paper idea with well-designed execution * The paper is easy to read and follow * Novel proposed method with theoretical flavour * Well designed empirical validation * Impressive boost in performance for all sparse training methods studied when the proposed method is applied to them * The paper is significant for the sparse training community and has the potential of changing how sparse training methods are designed nowadays Weaknesses: * I don’t see major weak points, except that it is not clear when the source code will be made available for easy reproducibility Technical Quality: 3 Clarity: 3 Questions for Authors: Q1) It seems that S2-SAM works with sparse networks, but not too well with dense networks. The sparser the network is, the more impactful S2-SAM is on the overall performance (lines 252-253 and the majority of the results). Can you prepare a systematic study (experiment) to quantify and illustrate this behavior better (e.g., by varying the sparsity level from 0 to 100% in small steps, or alternatives…)? Q2) Could you present in an Appendix how the loss surface visualisations from Figures 1 and 3 have been computed? I see that you cite [6], but overall this seems to be an approximation method to visualize the loss surface of a very high dimensional space.
If so, this shall be properly acknowledged to avoid bringing inaccurate ideas to readers’ minds. Q3) It seems that on ImageNet, which is a much more challenging dataset, your proposed method together with MEST or RigL outperforms the dense baseline (without S2-SAM). Can you present the results for this latter case also? Do you have any idea why this behaviour is different on CIFAR 10/100? Probably, this deserves a longer qualitative discussion. Q4) The whole empirical validation has been performed on convolutional neural networks (not on all neural network architectures), and the “chaotic” loss surface is probably just one (not the only one) of the reasons which can help understand the training behaviors of sparse neural networks. Can you please identify better in the paper the boundaries of your work to avoid inaccurate conclusions in the readers’ minds? Q5) (minor) To expand the broadness of the experimental results. In Table 1, would it be possible to add also the 80% sparsity level? In Table 2, would it be possible to add also a SET 5x? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer ZjXc, We sincerely appreciate your thoughtful comments. All source code will be released after the paper is accepted. **Q1: Systematic study to quantify and illustrate the proposed method?** Thank you for the question. We want to stress that our paper focuses on providing a universal solution for training a sparse neural network with different algorithms. Compared to dense training, our contribution has more ***practical*** significance. Meanwhile, our method works well on dense training (negligible accuracy loss against SAM) with a significant speed improvement compared to SAM. The table below shows the comparison between MEST with and without our proposed method, S$^2$-SAM, in terms of accuracy on ResNet-32 under different sparsity levels from 0% to 98%. | Sparsity | Dense | 20\% | 40\% | 60\% | 80\% | 90\% | 95\% | 98\% | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | MEST(EM) | 94.58 | 94.35 | 93.92 | 93.58 | 93.23 | 92.56 | 91.15 | 89.22 | | MEST(EM) + S$^2$-SAM | 94.99 | 94.67 | 94.38 | 94.11 | 93.88 | 93.43 | 91.58 | 91.22 | | $\Delta$ Accuracy | 0.41 | 0.32 | 0.46 | 0.53 | 0.65 | 0.87 | 0.43 | 2.00 | As shown in the table, as sparsity increases, the accuracy difference between MEST with and without S$^2$-SAM becomes more significant. We will include this table in the revised version of our paper. **Q2: Loss surface visualization method and citation [6].** Thank you for your question and sorry for the confusion. In our paper, we used the method cited as [11] to generate the loss surface visualizations in Figures 1 and 3, and we use the metric in [6] to compute the sharpness of the surface. We will make it clear in our revised paper.
The method in citation [11] is a ***widely used*** technique for showing loss surface visualizations [R1-R4], employing a random direction approach to approximate the 2D projected space of the loss surface (i.e., a 2D contour with loss values which can be converted to a 3D image). We will further explain the method of loss surface visualization in our revised paper, and we will cite [11] again in the captions of Figures 1 and 3 for clarity. From citation [6] in our paper, we identify the $Ra$ value in the captions of Figures 1 and 3 as the mean absolute deviation of the z-axis value (loss value) to evaluate the sharpness of the surface. A smaller $Ra$ indicates a smoother loss surface, suggesting improved generalization ability. We will explain [6] and [11] in more detail in the Appendix of the revised version of our paper. [R1] Chen, Xiangning, et al."When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations", ICLR 2022 [R2] Zhang, Xingxuan, et al. "Gradient Norm Aware Minimization Seeks First-Order Flatness and Improves Generalization", CVPR 2023 [R3] Du, Jiawei, et al. "Efficient Sharpness-aware Minimization for Improved Training of Neural Networks", ICLR 2022 [R4] Mi, Peng, et al. "Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach", NeurIPS 2022 **Q3: Why is some sparse training accuracy higher than the dense baseline on ImageNet but not on CIFAR?** The reason for this behavior is that the CIFAR datasets are relatively small, which can make DNNs prone to overfitting and make it harder to improve accuracy significantly. In contrast, the ImageNet dataset is significantly larger, providing a more challenging and diverse set of images that help train more general models that overcome overfitting.
Under such circumstances, extending training time (***MEST 1.7$\times$*** or ***RigL 5$\times$*** in Table 2) is very helpful for improving accuracy, as the original accuracy (MEST or RigL without S$^2$-SAM) is already very close to the dense training baseline. We will provide this discussion in the revised paper. **Q4: Identify in the paper the boundaries of your work regarding tasks.** Thank you for the good suggestion. Our paper mainly explores CNN structures and their loss surfaces to understand the training behaviors of sparse neural networks. We will emphasize this focus in the Introduction of the revised version of the paper to ensure clarity and avoid any inaccurate conclusions. We also conduct additional experiments on an NLP task due to a reviewer's comment. We evaluate our methods on a translation task using the Transformer model [R5] on the WMT-14 En-De dataset, reporting the best SacreBLEU scores on the validation dataset in the table below. We applied our method with uniform sparsity levels of 80\% and 90\% across all layers. Compared to MEST(EM), our method demonstrates improved performance in both dense and high-sparsity scenarios. We will integrate more NLP results in our revised paper. | Method | SacreBLEU | | :--- | :---: | | Dense | 27.6 | | Dense + S$^2$-SAM | 27.9 | | Method | SacreBLEU | SacreBLEU | | :--- | :---: | :---: | | | 80\% sparsity | 90\% sparsity | | MEST(EM) | 27.1 | 26.4 | | MEST(EM) + S$^2$-SAM | 27.5 | 27.2 | [R5] Vaswani A, et al. "Attention is all you need", NeurIPS 2017 **Q5: Add 80% sparsity in Table 1 and add SET 5x in Table 2.** Thank you for your thoughtful suggestions. We report 90\%, 95\%, and 98\% sparsity in our paper in the first place because most current literature primarily utilizes them for comparison.
Due to time and resource limits, we report the accuracy at 80\% sparsity on CIFAR-10 in the response to Q1 for Table 1, and we will include that and all the other accuracy results at 80\% sparsity in the revised version of our paper to guarantee a more comprehensive comparison. For Table 2, we plan to include the results from SET 5$\times$ in a future revision. Below is the comparison of SET 5$\times$ with and without our method on the ImageNet dataset at 80\% and 90\% sparsity, respectively.

| Method | 80\% sparsity | 90\% sparsity |
| :--- | :---: | :---: |
| SET $_{5 \times}$ | 74.60 | 72.43 |
| SET $_{5 \times}$ + S$^2$-SAM | 75.43 | 73.16 |

--- Rebuttal Comment 1.1: Title: Rebuttal acknowledgement Comment: Dear authors, Thank you for considering my comments and for the well-prepared rebuttal. I will keep my original score (accept). Best wishes,
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Generalization bounds for mixing processes via delayed online-to-PAC conversions
Reject
Summary: The authors establish generalization error bounds for non-i.i.d. data based on the online-to-PAC conversion. In particular, the authors extend the online-to-PAC conversion technique to non-i.i.d. settings by utilizing online learning with delayed feedback. In the paper, the authors present (1) a method of non-i.i.d. online-to-PAC conversion, (2) a method of converting online learning algorithms to their delayed counterparts, (3) the resulting non-i.i.d. generalization bounds combining (1) and (2), and (4) an extension of (1) to dynamic hypothesis learning. Strengths: - A well-organized, well-motivated paper on generalization error analysis and online learning. - The claimed results are novel and help us better understand generalization in dynamic environments. Weaknesses: - Notable logical gap: Lemma 3 only gives a regret bound **independent** of $P^*$, but it seems Corollaries 3 and 4 need a $P^*$-dependent regret bound. I believe this is fixable, but it still needs some fix. - More discussion on related work: Are there any results previously not known, but that can be proved with the proposed method? Technical Quality: 3 Clarity: 4 Questions for Authors: - Can you show instantiations of Theorem 4 using EWA and FTRL (as in Corollaries 3 and 4)? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: Limitations are not explicitly discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1) Notable logical gap: Lemma 3 only gives a regret bound independent of $P^*$, but it seems Corollaries 3 and 4 need a $P^*$-dependent regret bound. I believe this is fixable, but still, need some fix. Thanks for pointing this out! This typo is indeed a bit confusing: the right-hand side is of course also allowed to depend on $P^*$, and it indeed should depend on the comparator for the result to be applicable. We will fix this in the final version. Q2) More discussion on related work: Are there any results previously not known, but can be proved with the proposed method? The explicit bounds that we propose in Corollaries 3 and 4 are all novel, and our mixing assumptions are generally weaker than what is usually considered in the literature. We will emphasize this more effectively in the final version. Q3) Can you show instantiations of Theorem 4 using EWA and FTRL (as in Corollaries 3 and 4)? We omitted these instantiations due to space limitations; otherwise, they can easily be derived by analogy with the two corollaries you mention. We will expand on this, either by adding explicit examples or by mentioning them more explicitly. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I confirm my concerns/questions are all addressed adequately.
Summary: This paper studies the generalization error of statistical learning in a non-i.i.d. setting, where the training data distribution may have temporal dependency. The authors develop a framework that reduces the generalization error in this case to the regret of an online learning problem with delayed feedback. Then, they present a series of instantiations of their results with different online learning algorithms and assumptions on the data generation process. Strengths: 1. The proposed framework is elegant and widely applicable to many real-world data generating processes. 2. The paper is easy to follow, and the setting is well presented. The introductory section for the reduction in the i.i.d. case is very helpful in understanding the context. Weaknesses: 1. The proposed framework seems to be a straightforward extension of that in the i.i.d. setting. The technical novelty of this work seems limited. 2. The instantiation of the framework given in this paper is still very high level and abstract (for example, the algorithm considered is the general follow-the-leader algorithm). It would be beneficial to have some specific instantiations and show whether the obtained results are comparable to existing ones, similar to what has been done in Lugosi and Neu (2023). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. typo in Line 43: double "propose" 2. Assuming that the data is $\beta$-mixing, how do your results compare to previous work? Do your results improve over existing ones? 3. Is the framework applicable when the online learning algorithm is under the online mirror descent framework? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Nothing necessary stands out. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Regarding the technical novelty of our method, please see our general response. Regarding instantiating our results for some specific settings: We omitted these instantiations due to space limitations; otherwise, they can easily be derived by analogy with the results of Lugosi and Neu (2023). We will expand on this, either by adding explicit examples or at least by mentioning them more explicitly. Q1) Assuming that the data is $\beta$-mixing, how do your results compare to previous work? Do your results improve over existing ones? Our assumption is weaker when the loss is bounded (hence implied by the beta-mixing condition in the papers you mention). For unbounded losses, neither of the assumptions is stronger than the other (i.e., one may have beta mixing without our assumption being satisfied in some cases, but in other cases our assumption may be satisfied without beta mixing). The results provided in our work improve over those of Mohri (2008): those involve a Rademacher complexity term that is often looser in practice than what one can derive from PAC-Bayesian bounds. Q2) Is the framework applicable when the online learning algorithm is under the online mirror descent framework? Our framework is fully general and can make use of any online learning algorithm with bounded regret. In particular, one can use OMD-style algorithms through the reduction stated in Section 4.1, and obtain results that are essentially identical to the results stated in Section 4.4 for FTRL-style methods. (The only difference would be replacing $h(P^*) - h(P_1)$ by the Bregman divergence $B_h(P^*\| P_1)$, which are equal when $P_1$ is chosen as the minimizer of $h$.) We refer to Sections 6 and 7 in “A Modern Introduction to Online Learning” by Orabona (2019) for a detailed discussion of both families of methods.
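The Bregman-divergence remark in Q2 above is a one-line check from the standard definition (nothing here is specific to the paper):
$$B_h(P^*\,\|\,P_1) \;=\; h(P^*) - h(P_1) - \langle \nabla h(P_1),\, P^* - P_1\rangle,$$
so when $P_1$ is an interior (unconstrained) minimizer of $h$, first-order optimality gives $\nabla h(P_1) = 0$, the inner-product term vanishes, and $B_h(P^*\|P_1) = h(P^*) - h(P_1)$, matching the FTRL-style bound.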
Summary: This paper extends the Lugosi-Neu (2023) framework for upper bounding the generalization error of statistical learning algorithms to the non-i.i.d. setting, by considering that the training samples are drawn from a suitably mixing stochastic process. They show that the existence of a delayed online learner with bounded regret in the Online-to-Batch game of Lugosi-Neu (2023) against an offline learner implies that the offline learner has low generalization error even when trained on data drawn from a mixing stochastic process. The authors also investigate settings such as FTRL and MWU under this model. Strengths: The paper addresses an important question - how to bound the generalization error of statistical learning algorithms trained on non-i.i.d. data in a manner which is independent of the complexity of the statistical learner? The paper does a fine job at establishing the notion of such bounds and some conditions under which such bounds are recoverable. Weaknesses: The techniques seem to be largely an amalgamation of several papers which have refined the "blocking technique" in various settings, together with the key observation that the introduction of delay in online games leads to the online cost being a sum of martingale difference sequences, which essentially allows them to use the proof techniques of Lugosi-Neu (2023). The delayed online learning setting is new to me, and I am not sure how to evaluate its significance versus the standard online learning setting. In fact, it seems like getting similar bounds w.r.t. the standard online setting would involve significantly more technical novelty, compared to the current setting. The stochastic process also seems to be quite well-behaved in comparison to previous works in the literature such as Mohri and Rostamizadeh (2011), who give generalization bounds (in the pure offline setting) under stochastic processes with weaker notions of convergence.
The authors mention that the results hold for a specific class of bounded loss functions, but I could not find specific details regarding this point afterwards in the paper. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Can the authors comment on the delayed learning setup vs the normal online setup of Lugosi-Neu (2023)? Given that this is a central construct essential to the proof idea, I strongly believe that the discussion section should devote some attention to this question. 2. Is it possible to get high probability bounds for the generalization error when the offline learner is trained on $\beta$-mixing processes as defined in Yu (1994), Meir (2000), Mohri and Rostamizadeh (2008), etc.? 3. Is it possible to get similar results for a more general family of loss functions (for example Lipschitz loss functions)? Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: The discussion of limitations in the current work is limited, and lacks discussion as to why certain choices were made (or overlooked). Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1) Can the authors comment on the delayed learning setup vs the normal online setup of Lugosi-Neu (2023)? See our general response to all reviewers regarding the necessity / usefulness of delays in this setting. The setting of online learning with delays is well-studied, and the results we borrow from Weinberger & Ordentlich (2002) are minimax optimal. We will expand our discussion of this setting in the final version. Q2) Is it possible to get high probability bounds for the generalization error when the offline learner is trained on $\beta$-mixing processes as defined in Yu (1994), Meir (2000), Mohri and Rostamizadeh (2008), etc.? Our assumption is weaker when the loss is bounded (hence implied by the beta-mixing condition in the papers you mention). For unbounded losses, neither of the assumptions is stronger than the other (i.e., one may have beta mixing without our assumption being satisfied in some cases, but in other cases our assumption may be satisfied without beta mixing). The results provided in our work improve over those of Mohri and Rostamizadeh (2008): those involve a Rademacher complexity term that is often looser in practice than what one can derive from PAC-Bayesian bounds. Q3) Is it possible to get similar results for a more general family of loss functions (for example Lipschitz loss functions)? We can instantiate more examples, essentially all the settings discussed in Lugosi and Neu (2022, 2023), with different choices of the convex functional on the measure space. For instance, note that their Section 3.2 lists numerous generalization bounds, some of which hold for unbounded loss functions. All of these can be instantiated in our setting as well, thanks to the generality of our framework. We will emphasize this in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I am mostly satisfied with the responses. I will be maintaining my current positive rating.
As an aside, I would urge the authors to add an exposition on the delayed setting due to the importance of the technique in this paper. It might be well-studied (as the authors have claimed), but I do not think it is well-established to the point of being self-explanatory.
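To make the delayed-feedback construction discussed above concrete, here is the standard Weinberger–Ordentlich (2002) reduction under the assumption of losses in $[0,1]$ (a textbook sketch, not a corollary taken from the paper): with delay $D$, run $D+1$ independent copies of a base algorithm, copy $i$ handling rounds $i,\, i+(D+1),\, i+2(D+1),\dots$, so that each copy always acts on fully revealed feedback. Instantiated with EWA over $K$ experts, each copy plays about $T/(D+1)$ rounds, giving
$$\mathrm{Regret}_T^{(D)} \;\le\; (D+1)\sqrt{\frac{T}{2(D+1)}\ln K} \;=\; \sqrt{\frac{(D+1)\,T\ln K}{2}},$$
i.e., the price of delay is a multiplicative $\sqrt{D+1}$ over the undelayed EWA bound.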
Summary: The paper focuses on learning from non-i.i.d. data. Specifically, the authors develop a framework that derives generalization guarantees through a reduction to an online learning game with delays, where achieving low regret translates to low generalization error. They present specific bounds when using EWA and FTRL as the online learning algorithms. Additionally, the framework is extended to accommodate dynamic hypotheses. Strengths: The paper is well-written and easy to follow. The proposed framework is general, novel, and elegantly designed, facilitating a clear translation between low regret in online learning algorithms and low generalization error in the context of mixing data. I appreciate the simplicity and flexibility of the framework, and overall, it represents a valuable contribution to the field. Weaknesses: While I did not go over the entire details, I did not find any major weaknesses. - A small typo in line 43: "..we propose propose.. " Technical Quality: 4 Clarity: 3 Questions for Authors: Could the authors provide their perspective on potential future directions and limitations of their framework? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1) Could the authors provide their perspective on potential future directions and limitations of their framework? We believe this framework and its flexibility should motivate the investigation of generalization bounds in more general non-i.i.d. settings. There are still many questions not covered in this paper, such as the ones raised by other reviewers about considering different assumptions on the mixing process. One limitation of our framework is that it is limited to non-i.i.d. processes that are stationary, but this assumption is necessary to make sure that the test error and the generalization error are well-defined in the first place. In our view, defining notions of generalization without stationarity is the most interesting challenge for further research in this area. We hope that our work can provide interesting insights that may contribute towards achieving this goal.
Rebuttal 1: Rebuttal: We thank the reviewers for their time and constructive feedback on our submission, which we will incorporate to improve our paper. We are glad to see that all reviewers have appreciated our contribution, and in particular the simplicity and generality of our framework. Some reviewers have asked about the technical novelty of our results. In response, let us say that the technical contribution is really as simple as introducing delays into the online learning algorithm to deal with non-stationarity. While we can see why this idea may feel natural in hindsight, we wish to take this opportunity to emphasize that it was not obvious a priori that such a simple idea would solve the problem we study in the paper. In fact, after the NeurIPS submission deadline, a concurrent paper appeared on arXiv studying the exact same problem, but without making use of delays: https://arxiv.org/abs/2405.13666 Their analysis is directly inspired by the work of Agarwal and Duchi (2011) on online-to-batch conversions for convex optimization, and requires much stronger assumptions than our analysis based on delays. While we still think that this arXiv paper presents an interesting contribution, we believe that it also nicely illustrates how non-trivial and useful the idea of introducing delays into the online-to-PAC framework is. We hope that the reviewers will find this response helpful in assessing the value of our contribution.
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper provides a framework for proving generalization bounds for non-i.i.d. data sequences, building upon a recent framework introduced by Lugosi and Neu (2023) that reduces PAC to online learning. This technique recovers some known PAC-Bayesian bounds for non-i.i.d. scenarios and has various other implications. The original framework by Lugosi and Neu (2023) introduced an online learning "generalization game", where the regret of the online learning algorithm can be translated into a generalization bound in the offline setting. This framework has been shown to recover some important generalization bounds with a clean analysis. In this paper, the generalization game is extended to a game where the learner gets to see the observation with delay (with no delay, we are back to the original framework). The regret of the online learner in this game can again be translated into a generalization bound. When the delay is large, it increases the regret of the online learner (one term in the generalization bound) but decreases the term determined by the property of "how much the sequence is non-i.i.d.". Online learning with delays has been studied extensively, and so "off-the-shelf" algorithms and regret bounds can be used to derive/recover generalization bounds. One nice application of the technique is that it allows the analysis of stationary mixing processes that have been studied extensively (the assumption on the non-i.i.d. sequences is weaker than known mixing assumptions). Another interesting application is to popular dynamic predictions such as autoregressive models and RNNs. Strengths: Deriving generalization bounds for non-i.i.d. settings is a central effort in the machine learning community and is of great interest. The framework suggested in this paper allows us to do so in a very clean way and might be useful for more applications. Also, the assumption on the non-i.i.d. sequences is quite weak, which is an advantage.
Weaknesses: The paper heavily builds on the framework of Lugosi and Neu (2023). This is not a weakness, but my question is: besides extending the online game to accommodate delays and using ideas from the online learning literature, what are the technical challenges/contributions in this paper? Another question - do you know if the paper by Lugosi and Neu (2023) was already published? I'm asking since I didn't go over the proofs in this paper. Technical Quality: 3 Clarity: 4 Questions for Authors: See above. Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Limitations are properly addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1) besides extending the online game to accommodate delays and using ideas from the online learning literature, what are the technical challenges/contributions in this paper? See our general response to all reviews above. Q2) Do you know if the paper by Lugosi and Neu (2023) was already published? I'm asking since I didn't go over the proofs in this paper? To our knowledge, that paper is under review at a journal. We note that the proofs in our submission are almost entirely self-contained, and the only really important technical result we use from Lugosi & Neu (2023) is Theorem 1, whose simple proof we reproduce in our own Appendix. The rest of the results we cite from that work are mostly standard regret bounds that can be found in many other references on online learning. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I will keep my positive score.
null
null
null
null
null
null
Memory-Efficient Gradient Unrolling for Large-Scale Bi-level Optimization
Accept (poster)
Summary: This paper studies scalable bi-level optimization problems and points out the limitations of most traditional methods. Among these, to mitigate the high memory cost of the GU method, it proposes $\text{(FG)}^2$U such that the space consumption is reduced from $\mathcal{O}(MN)$ to $\mathcal{O}(M)$. Convergence analysis is provided, and extensive experiments show the benefits of the algorithm. Strengths: 1. The presentation is great, and the paper is easy to read. 2. The experiments are extensive, covering both small- and large-scale settings. Weaknesses: 1. The memory cost is reduced, but the computational cost seems to increase significantly. The computational cost is of order $\mathcal{O}(KTN)$ since $b=\mathcal{O}(N)$. This also feels unrealistic in large-scale applications when $N$ is large. 2. Based on the previous question, I checked the choice of $b$ in the experiments, which is small. How do the authors tune this parameter? Is there any ablation study on it? 3. In terms of the discussion in Appendix B.2, IF methods also require $\mathcal{O}(M)$ space using some approximation tricks. There indeed exist approximation errors in IF methods. However, normally, the approximation errors can be controlled to be very small through hyperparameter selection. Thus, IF methods seem suitable for scalable bi-level optimization problems to some extent. Do the authors have time to include IF methods in Table 2? 4. The algorithm is deterministic, without consideration of batch data. Can the authors provide some insights into stochastic settings? For example, can the unbiasedness of the hypergradient estimator still be satisfied, or is there any requirement on the batch size? 5. Can the authors provide real space consumption in the comparison (Table 1, Table 2)? Technical Quality: 2 Clarity: 3 Questions for Authors: See weakness part. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: See weakness part.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the overall positive evaluation and will try to address the concerns raised point by point. **Regarding the complexity** We direct the reviewer to the general response for computational cost analysis and discussions regarding the computation in practice, including the empirical success of FG/ZO in large-scale applications with constant batch size, tricks to reduce the variance, how the parallelizable nature of $\text{(FG)}^2$U fits modern AI computation, and the more cost-effective two-phase methodology in practice. **Regarding the strategy to choose $b$** We follow the methodology [2][3] developed for FG/ZO in large-scale applications to choose $b=\mathcal{O}(1)$ rather than $\mathcal{O}(N)$. We select the largest possible $b$ that does not exceed the GPU memory limit to fully utilize the GPU. See Tables C and D in the attached PDF in the general response for the exact memory consumption. Notice that we additionally perform gradient accumulation through iterative/multi-GPU parallel computation, which will also influence the variance. The number of accumulation steps is tuned in our initial attempts according to the wall-clock efficiency and stability. **Regarding the IF methods** Although the memory efficiency of IF methods makes them suitable for large-scale BO to some extent, there are several inherent limitations associated with the approximation of these methods. As discussed in Appendix B.2, the approximation bias due to unsatisfied KKT conditions cannot be controlled by hyperparameter tuning. Additionally, the challenge of stochastic approximation in IHVP is highlighted in [1]. Specifically, while the Hessian $H$ is approximated via mini-batch sampling, it holds that $H^{-1} = (E[H])^{-1} \neq E[H^{-1}]$. This corresponding bias also cannot be mitigated through hyperparameter tuning. 
In addition, following the reviewer's suggestion, we conducted additional experiments to complement the findings in Table 2, focusing on the comparison between IF methods and other approaches. We considered two IF methods: Hessian-free and Neumann. The Hessian-free method approximates the inverse Hessian as an identity matrix scaled by a scalar. The Neumann method approximates the inverse Hessian vector product by utilizing the Neumann Series (see Equation 25, noting that the exponent $k$ is missing for $(I - \alpha H)$). Due to the limited time available during the rebuttal period, we focused solely on the StreamingQA dataset. We fixed the base model as GPT2 and the unrolled depth at 48. For the Hessian-free method, the scalar can be merged into the learning rate since the explicit gradient defined in Equation 3 is zero in this case. We selected the learning rate from $[1E-4, 5E-5, 1E-5, 5E-6, 2.5E-6]$, with $5E-6$ yielding the best validation loss. For the Neumann method, we tuned the hyperparameters $\alpha$ and $K$ in Equation 25. We conducted a grid search with $(\alpha, K) \in [1, 0.1, 0.01, 0.001] \times [10, 20, 40]$ and a learning rate of $2.5E-6$. The optimal combination was $\alpha = 0.01$ and $K = 40$. No further performance improvement was observed by tuning the learning rate or increasing $K$ to 80. Please refer to Table A in the attached PDF in the general response for results. We observe the following: (1) Both IF methods yield improvements on RGU (with an unrolled depth of 6). (2) By carefully tuning the hyperparameters, the Neumann method achieves further improvements over the Hessian-Free method, albeit with additional computational cost. (3) Despite the careful hyperparameter tuning for Neumann, its performance remains inferior to $\text{(FG)}^2$U. 
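As a sanity check on the Neumann-series approximation discussed above (Equation 25 with the exponent $k$ restored), the following minimal numpy sketch verifies that $H^{-1}v \approx \alpha \sum_{k=0}^{K}(I-\alpha H)^k v$ for a small SPD matrix. The $6\times 6$ matrix, the probe vector, and the choice $\alpha = 1/\lambda_{\max}(H)$ are toy assumptions for illustration, not part of the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SPD matrix standing in for the inner-problem Hessian, and a probe vector.
A = rng.standard_normal((6, 6))
H = A @ A.T + 6.0 * np.eye(6)        # SPD with eigenvalues bounded away from 0
v = rng.standard_normal(6)

# Neumann-series IHVP: H^{-1} v = alpha * sum_{k>=0} (I - alpha*H)^k v,
# which converges whenever 0 < alpha < 1 / lambda_max(H).
alpha = 1.0 / np.linalg.eigvalsh(H).max()
K = 400
acc, term = np.zeros_like(v), v.copy()
for _ in range(K + 1):
    acc += term
    term = term - alpha * (H @ term)  # apply (I - alpha*H) using only HVPs
ihvp = alpha * acc

exact = np.linalg.solve(H, v)         # ground truth for comparison
```

The loop touches $H$ only through Hessian-vector products, which is exactly why the method is memory-efficient for deep networks: the full Hessian never needs to be materialized.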
**Regarding the stochastic $\text{(FG)}^2$U** In the stochastic context, the bilevel optimization problem is formulated as: $$\min_{\phi\in \Phi}\ h(\phi):=f(\theta_T(\phi), \phi)=E_{\xi}[F(\theta_T(\phi), \phi; \xi)]$$ $$\text{where}\quad \theta_0(\phi) = \Omega_0(\phi),\ \theta_t(\phi) = \Omega_t(\theta_{t-1}(\phi), \phi; \zeta)\in\Theta,\ t = 1,\dots,T,$$ where $\xi$, $\zeta$ are random variables. Assume that the sampled datasets for the meta-objective and the lower-level objective are respectively $D_{F}$ and $D_{G}$, related to the random variables $\xi$ and $\zeta$. We have the hypergradient under the stochastic context: $$\nabla_\phi ' h(\phi) = \frac{\partial F(\theta_T(\phi;D_G), \phi;D_F)}{\partial \theta_T} \frac{d\theta_T(\phi,D_G)}{d\phi} + {\frac{\partial F(\theta_T(\phi;D_G), \phi;D_F)}{\partial \phi}}.$$ With the Forward Gradient, the estimator of the hypergradient is $\nabla_\phi ' h(\phi) vv^{\top}$. Subsequently, from the independence between the forward gradient vectors $v$ and the batch sampling, we have $$E[\nabla_\phi ' h(\phi)vv^{\top}] = E_{\xi,\zeta}[E_{v}[\nabla_\phi ' h(\phi)vv^{\top}]]= E_{\xi,\zeta}[\nabla_\phi ' h(\phi)]$$ $$=E_{\xi,\zeta}\left[\frac{\partial F(\theta_T(\phi;D_G), \phi;D_F)}{\partial \theta_T} \frac{d\theta_T(\phi,D_G)}{d\phi} + {\frac{\partial F(\theta_T(\phi;D_G), \phi;D_F)}{\partial \phi}}\right]$$ $$= \frac{\partial f(\theta_T(\phi), \phi)}{\partial \theta_T} \frac{d\theta_T(\phi)}{d\phi}+ \frac{\partial f(\theta_T(\phi), \phi)}{\partial \phi}=\nabla_\phi h(\phi),$$ which gives the unbiasedness of $\text{(FG)}^2$U for stochastic bilevel optimization problems. **Regarding the real space consumption** We direct the reviewer to Tables B, C, and D in the attached PDF in the general response.
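The unbiasedness argument above can also be checked numerically. A minimal Monte Carlo sketch with a hypothetical fixed hypergradient vector `g` (the only ingredient taken from the text is the estimator itself, $(\nabla h \cdot v)\,v$ with $v \sim \mathcal{N}(0, I)$, so that $E[vv^\top] = I$):

```python
import numpy as np

rng = np.random.default_rng(0)

g = np.array([1.0, -2.0, 0.5])   # hypothetical hypergradient nabla_phi h
B = 200_000                       # number of tangent vectors v

# Forward-gradient estimator: average of (g . v) v with v ~ N(0, I).
# Since E[v v^T] = I, we have E[(g . v) v] = g, i.e. the estimator is unbiased.
V = rng.standard_normal((B, g.size))
est = ((V @ g)[:, None] * V).mean(axis=0)
```

With enough samples, `est` concentrates around `g`; in practice the variance is reduced by batching over $b$ tangent vectors per step rather than a single $v$.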
[1] Making Scalable Meta Learning Practical, https://arxiv.org/abs/2310.05674 [2] Fine-Tuning Language Models with Just Forward Passes, https://arxiv.org/abs/2305.17333 [3] Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark, https://arxiv.org/abs/2402.11592 --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I thank the authors for their response. Some of my concerns have been addressed. However, I am still slightly concerned about the theoretical guarantee on the problem dimension. There are also some works on bilevel optimization using zeroth-order types of methods. Maybe the authors would like to include them and provide a comparison. I keep my current score. Best, Reviewer --- Reply to Comment 1.1.1: Title: Thanks again for your valuable feedback Comment: Thank you for your response. We regret to hear that some concerns remain. Regarding the convergence guarantee dependent on the problem dimension, we acknowledge that further improvement of the theoretical results would require assumptions that may be impractical. In the field of backpropagation-free optimization (FG/ZO), the gap between dimension-dependent theoretical results and the significantly positive empirical outcomes in large-scale cases remains an open question. We hope that future research will help bridge this gap. Regarding BO + ZO, we followed the reviewer’s suggestions and identified several related works [1][2][3]. Given the limited time remaining in the discussion period, we are unable to conduct a comprehensive empirical study of these methods. However, we offer some preliminary comments here and will seriously consider incorporating comparisons in our revised paper. 1. [1][3] propose utilizing zeroth-order Hessian/Jacobian approximations for IF-based methods, whereas our work focuses on GU-based methods. 
It is important to note that zeroth-order approximation cannot eliminate the inherent bias introduced by IF-based methods, and the theoretical guarantees provided in these works are also dimension-dependent. 2. [2] employs zeroth-order optimization in a GU manner. However, [2] is limited to Neural Architecture Search rather than universal BO. Additionally, the inner problem considered in [2] is differentiable (white-box), while our exploration of $\text{(FG)}^2$U-ZO is in the more challenging black-box setting. We would like to once again extend our sincere thanks for your valuable feedback. We remain open to answering any further questions during the remaining time of the discussion period. [1] On the Convergence Theory for Hessian-Free Bilevel Algorithms, https://arxiv.org/abs/2110.07004 [2] ZARTS: On Zero-order Optimization for Neural Architecture Search, https://arxiv.org/abs/2110.04743 [3] Fully Zeroth-Order Bilevel Programming via Gaussian Smoothing, https://arxiv.org/pdf/2404.00158
Summary: The paper introduces a method called Forward Gradient Unrolling with Forward Gradient, abbreviated as (FG)²U, which is designed to address the large memory requirements of forward method in bi-level optimization in large-scale machine learning model Strengths: 1. The method significantly reduces memory overhead compared to traditional gradient unrolling methods, making it suitable for large-scale applications. 2. Can be easily implemented within popular deep learning frameworks and adapted to various optimization problems. Weaknesses: 1. The proposed method introduces additional computational complexity. Can the author give an analysis of the complexity? 2. The convergences analysis is for the problem (2) not the original problem (1). 3. More large scale datasets are needed. Technical Quality: 2 Clarity: 3 Questions for Authors: see weakness Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. We will try to address the concerns point by point. **Regarding the computational complexity** We direct the reviewer to our general response. We place a computational cost analysis in A.1 and a more detailed discussion on practical computation in A.2. **Regarding the convergence of problem (1)** To extend the convergence from optimization problem (2) to (1), we need to assume that the lower-level objective function $g$ is strongly convex w.r.t. $\theta$, as is commonly done in previous works [2][3]. From the strong convexity and first-order smoothness (Assumption 3.2) of $g$, we have 1) the zeroth- and first-order smoothness of $\theta^*(\phi)$; 2) $\|\theta_T(\phi')-\theta^*(\phi')\|\to 0$ as $T\to +\infty$. Then the inequality $$\|\theta_T(\phi)-\theta_T(\phi')\|\leq \|\theta_T(\phi)-\theta^*(\phi)\|+\|\theta_T(\phi')-\theta^*(\phi')\|+\|\theta^*(\phi)-\theta^*(\phi')\|$$ implies the zeroth- and first-order smoothness of $\theta_T(\phi)$. Following the same line of proof as presented in our paper, we derive the smoothness of $f(\theta^*(\phi),\phi)$ and $f(\theta_T(\phi),\phi)$, and subsequently the convergence of either problem (1) or (2). However, as discussed in lines 73 to 77, given that the scope of this paper is large-scale BO, the inner optimization typically involves deep neural networks. Therefore, the optimal parameters are not explicitly accessible and can only be estimated through iterative procedures. Most related works [1][2][3] implicitly or explicitly solve (2) instead of (1). Additionally, it is important to acknowledge that strong convexity is often unfeasible in practical applications. Consequently, in this paper, we aim to present a more practical convergence theory that proves the effectiveness of our method. **Regarding larger-scale datasets** As discussed in Appendix H, this work serves as the first attempt to apply FG/ZO in BO.
While the scale of cases studied in this paper is relatively small compared to the most powerful generative models in the industry, it is comparable to recent large-scale BO works such as [1] and significantly larger than traditional BO works. We believe the empirical studies conducted in this paper are sufficient to demonstrate the potential of $\text{(FG)}^2$U in large-scale BO. We anticipate that the effectiveness of $\text{(FG)}^2$U will be further validated in larger scales. [1] Making Scalable Meta Learning Practical, https://arxiv.org/abs/2310.05674 [2] Truncated Back-propagation for Bilevel Optimization, https://arxiv.org/abs/1810.10667 [3] Bilevel Optimization: Convergence Analysis and Enhanced Design, https://arxiv.org/abs/2010.07962 --- Rebuttal Comment 1.1: Title: Response to author Comment: Thank you for the rebuttal, the authors have addressed all my concerns. I will increase my score. --- Reply to Comment 1.1.1: Title: Thanks again Comment: We are pleased to hear that our response has addressed your concerns. We would like to extend our sincere thanks for your valuable feedback again.
Summary: This paper presents a novel gradient unrolling algorithm for bi-level optimization. The authors highlight that existing methods for calculating meta gradients in the literature are not memory efficient. They propose a sampling-based method, (FG)^2U, to estimate the meta gradient. This approach approximates the meta gradient by multiplying it with a random rank-1 matrix, thereby simplifying the Forward Gradient Unrolling (FGU) scheme. The paper includes discussions on sampling efficiency, convergence analysis, and numerical experiments for (FG)^2U. Strengths: - The proposed method (FG)^2U is simple and easy to implement. - The writing is clear and easy to follow. - The theoretical results clearly demonstrate the relationship between convergence, sampling batch size, and parameter space size. - The three experiments, data condensation, LM fine-tuning, and PDE recovery, show the wide range of applications of bi-level optimization and the proposed method (FG)^2U. Weaknesses: A major concern is the theoretical dependence on the parameter dimension $N$. Theorem 3.4 indicates that the convergence rate depends linearly on N. It means that - With a fixed batch size, convergence on large-scale applications would be slow (vanilla (FG)^2U without additional techniques). - With an $O(N)$ batch size, convergence is satisfactory, but the memory cost grows as $O(MN)$, similar to FGU. - While the authors mention that (FG)^2U allows parallelization to mitigate computational overhead, it seems that the calculations of FGU (4) also permit parallelization, correct? Specifically, $d \theta / d \phi$ can be computed in parallel as $d \theta[1] / d \phi, d \theta[2] / d \phi, d \theta[3] / d \phi, \cdots$ Technical Quality: 3 Clarity: 4 Questions for Authors: Regarding the formula for data condensation (17) and related formulas in the appendix: should it be minimizing over ${\mathcal{D}_c}$ instead of minimizing over ${\mathcal{D}_o}$? 
Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The paper appears to have no potential negative societal impact. The authors discussed the limitations in the Appendix H. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the overall positive evaluation and will try to address the concerns raised point by point. **Regarding the complexity** The reviewer's understanding of Theorem 3.4 is generally correct. We would like to highlight the following points: Firstly, the convergence rate in Theorem 3.4 is an upper bound. We direct the reviewer to our general response (A.2) for a more detailed discussion on the choice of $b$ and the computation in practice, covering the empirical success of FG/ZO in large-scale applications with constant batch sizes, techniques to reduce variance, the parallelizable nature of $\text{(FG)}^2$U which aligns well with modern AI computation, and the more cost-effective two-phase methodology in practice. Secondly, the $\mathcal{O}(bM)$ memory cost with batch size $b$ can be reduced to $\mathcal{O}(M)$ through serial/parallel accumulation. In other words, we do not have to compute the full batch on a single GPU simultaneously. In practice, the memory cost of $\text{(FG)}^2\text{U}$ can be more manageable than that of FGU and RGU by choosing $b$ according to the hardware. It is important to note that both serial and parallel accumulation are not trivial for FGU and RGU. Thirdly, the parallelization method for FGU (Plan A) proposed by the reviewer can be expressed as $d\theta / d\phi = Z = I Z = \sum_i^M \text{diag}(e_i) Z = \sum_i^M e_ie_i^T Z$. As mentioned in Line 122, a special choice of $v$ is the normalized orthogonal coordinates (Plan B), which leads to $Z vv^T = N Z e_j e^T_j$, where $j$ is a random index drawn from $\text{Unif}(\{1,\ldots,N\})$. The differences are: (1) Plan A maintains the rows of $Z$, while Plan B maintains the columns. (2) Plan B utilizes randomization, while Plan A does not. We argue that Plan B is better on both counts: (1) Considering the update rule in Equation 4, row-wise updates require communication among threads, whereas column-wise updates do not.
(2) Plan A requires exactly $M$ threads, whereas Plan B provides smooth trade-offs via randomization. **Regarding the formula for data condensation** Yes, the reviewer is correct. We will fix the typo in the revised manuscript. --- Rebuttal Comment 1.1: Title: Response to authors Comment: I greatly appreciate the efforts made by the authors during the rebuttal phase, including their responses to my questions and the inclusion of additional experimental results. However, my primary theoretical concern, which I raised in the initial review, remains only partially addressed. The statement "Plan A requires exactly $M$ threads" may not be entirely accurate. It is possible to adaptively allocate the $M$ calculation tasks based on the constraints of the hardware. For instance, if $M=5$, one could opt for 3 threads and distribute the tasks as $[1,1,3]$, rather than being limited to $[1,1,1,1,1]$. This flexibility also casts doubt on the assertion that "the memory cost of $\text{(FG)}^2\text{U}$ can be more manageable than FGU and RGU," as one could potentially implement simple variants of FGU or RGU. Of course, as the authors have noted, these analyses are merely upper bounds. The actual performance in the experiments presented by the authors appears promising. Therefore, I will keep my positive score. --- Reply to Comment 1.1.1: Title: Thanks again for your valuable feedback Comment: Thank you for your response and for the recognition of our work. Regarding the convergence guarantee dependent on the problem dimension, we acknowledge that further improvement of the theoretical results would require assumptions that may be impractical. In the field of backpropagation-free optimization (FG/ZO), the gap between dimension-dependent theoretical results and the significantly positive empirical outcomes in large-scale cases remains an open question. We hope that future research will help bridge this gap. 
Regarding the parallelization plan, the essential information we intended to convey in our rebuttal response is that "a special parallelization plan is covered by $($FG$)^2$U as a universal framework." We acknowledge that we may have overstated the advantages of $($FG$)^2$U over FGU and RGU in terms of memory management, as pointed out by the reviewer. We will exercise caution with related statements in our revised paper. We would like to once again extend our sincere thanks for your valuable feedback. We remain open to answering any further questions during the remaining time of the discussion period.
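The unbiasedness of the rank-1 "Plan B" estimator debated in this thread, $\mathbb{E}[Z vv^T] = Z$ with $v = \sqrt{N}\,e_j$, can be checked numerically. A minimal pure-Python sketch on a toy matrix (the function name and example values are hypothetical, not the authors' implementation):

```python
def coord_estimate(Z, j):
    """Plan B rank-1 estimate: N * Z e_j e_j^T keeps only column j of Z, scaled by N."""
    m, n = len(Z), len(Z[0])
    G = [[0.0] * n for _ in range(m)]
    for i in range(m):
        G[i][j] = n * Z[i][j]
    return G

# Unbiasedness: averaging the estimate uniformly over all coordinates j
# recovers Z exactly (each entry appears once, scaled by n, divided by n).
Z = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
n = len(Z[0])
avg = [[sum(coord_estimate(Z, j)[i][k] for j in range(n)) / n
        for k in range(n)] for i in range(len(Z))]
assert avg == Z
```

Each single estimate keeps one column of Z (the column-wise maintenance the rebuttal contrasts with Plan A's row-wise scheme), and the average over all coordinates is exactly Z, which is the unbiasedness property underlying the convergence analysis.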
Rebuttal 1: Rebuttal: ## (A) General Response We thank all reviewers for their constructive feedback. In this response, we will address some common concerns raised by the reviewers. We will revise the manuscript to cover the following discussions. All reviewers raised concerns about the computation. More specifically, Reviewer 4Mx8 requested an analysis of the complexity. Reviewer cwe5 and Reviewer J9FE raised concerns about the dimension-dependent convergence rate of the forward gradient (FG). We will begin with a theoretical computational cost analysis of GU-based methods, then move to A.2 for discussions on practical computation. ### (A.1) Theoretical Computational Cost We follow existing works [4][5] in treating the transitions $\Omega_t: \theta_{t-1} \rightarrow \theta_{t}$ as indivisible computational units and conduct the computational cost analysis by focusing on Jacobian (jac), Jacobian-vector product (jvp), Hessian-vector product (hvp), vector-Jacobian product (vjp), vector-Hessian product (vhp), and Hessian-Jacobian product (hjp) operations around $\Omega$. The gradient computation costs are as follows: (i) FGU involves $T$ hjp and $T$ jac operations (Equation 4). (ii) RGU involves $T$ vjp and $T$ vhp operations (Equation 6). (iii) $\text{(FG)}^2$U involves $bT$ hvp and $bT$ jvp operations (Equation 9). Overall, the computational complexities of FGU, RGU, and $\text{(FG)}^2$U are $\mathcal{O}(MNT)$, $\mathcal{O}(MT)$, and $\mathcal{O}(bMT)$, respectively. Some remarks: (a) Either $b=\mathcal{O}(N)$ or $\frac{N}{b}$ times as many updates are required for $\text{(FG)}^2$U to achieve convergence, so the total computation will be $\mathcal{O}(MNT)$. (b) Note that (a) concerns the theoretical upper bound; we further discuss the practical computational cost in (A.2). (c) RGU is the most computationally efficient. However, as discussed in lines 105-111, memory issues impede RGU in large-scale scenarios.
$\text{(FG)}^2$U improves memory performance at the cost of increased computation. ### (A.2) Practical Computational Cost **Firstly, the dimension-dependent convergence rate is an upper bound, and the scalability has proven acceptable in practice.** As evidence, zeroth-order optimization (ZO), which can be regarded as a finite-difference approximation of FG, has been used as a standard technique for fine-tuning large language models (LLMs) [1][2]. The size of the LLMs studied is up to 66B, and the fine-tuning performance is competitive with full-parameter fine-tuning, with $b=\mathcal{O}(1)$ ($b=1$ in [1] and $b=16$ in [2]). **Secondly, the variance of FG/ZO can be reduced by various tricks.** [2] explored ZO + gradient pruning, which reduced the effective number of dimensions based on the lottery ticket hypothesis. [3] proposes ZO + SVRG to reduce the variance. We did not explore these tricks in our initial attempt to apply FG/ZO in BO, but we believe these are promising directions for our future work to scale up $\text{(FG)}^2$U and $\text{(FG)}^2$U-ZO, as we have discussed in Appendix H. **Further, $\text{(FG)}^2$U is suitable for modern AI computation due to its parallelizable nature.** With the practice of scaling laws and the development of AI infrastructure, the computational cost concerns of large-scale models are mitigated when efficient parallelization is available. A vivid example is the Transformer, whose quadratic complexity in sequence length is less favorable compared to RNNs, yet it has achieved impressive empirical success. The key to scaling up the Transformer lies in the parallelizable nature of attention. Similarly, the inherently parallelizable and hardware-friendly nature of $\text{(FG)}^2$U enables it to leverage large-scale distributed computing resources (as mentioned in lines 146-148) within popular deep learning frameworks (as discussed in lines 196-204), with minimal engineering effort.
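The ZO-as-finite-difference view of the forward gradient mentioned above can be illustrated in a few lines. A hedged pure-Python sketch on a toy quadratic objective (the function names and test point are hypothetical, not the paper's code); for a quadratic, the central difference along a coordinate direction recovers that gradient component essentially exactly:

```python
def zo_grad_coord(f, x, j, eps=1e-4):
    """Central-difference estimate of df/dx_j: the coordinate-wise ZO
    analogue of a forward-mode jvp along the basis direction e_j."""
    xp = list(x); xp[j] += eps
    xm = list(x); xm[j] -= eps
    return (f(xp) - f(xm)) / (2 * eps)

f = lambda x: sum(xi * xi for xi in x)   # toy objective; true gradient is 2x
x = [1.0, -2.0, 3.0]
grad = [zo_grad_coord(f, x, j) for j in range(len(x))]
assert all(abs(g - 2 * xi) < 1e-6 for g, xi in zip(grad, x))
```

Each estimate costs only two function evaluations and no stored computation graph, which is the memory property that makes ZO attractive for large-scale fine-tuning in [1][2].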
**Furthermore, the more cost-effective two-phase methodology can be utilized to reduce the overall computational expense.** Considering the computational cost, we do not recommend using $\text{(FG)}^2$U from scratch, as discussed in Sec. 3.2. Instead, we advocate for a two-phase methodology: initially, employing efficient yet less accurate gradient approximation methods, such as TRGU or Hessian-Free, and subsequently using $\text{(FG)}^2$U for more accurate, albeit less efficient, gradient approximation to further enhance performance. Finally, we want to emphasize the scope of this work: $\text{(FG)}^2$U is intended to complement, rather than overturn, the existing methodology of BO. The trade-off between computation and performance is a perpetual theme in computer science. Within the BO community, previous works have tended to sacrifice performance for reduced computational costs. We believe $\text{(FG)}^2$U will bring new insights to the BO community, prompting a reconsideration of this trade-off, especially in the current era of rapidly developing AI infrastructure and scaling law methodologies. [1] Fine-Tuning Language Models with Just Forward Passes, https://arxiv.org/abs/2305.17333 [2] Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark, https://arxiv.org/abs/2402.11592 [3] Variance-reduced Zeroth-Order Methods for Fine-Tuning Language Models, https://arxiv.org/abs/2404.08080v1 [4] Truncated Back-propagation for Bilevel Optimization, https://arxiv.org/abs/1810.10667 [5] Bilevel Optimization: Convergence Analysis and Enhanced Design, https://arxiv.org/abs/2010.07962 ## (B) Additional Experiments Please refer to the attached PDF for additional results, including comparisons to additional baselines and real memory consumption. Pdf: /pdf/5aae6ba8b4c683766dcdd7a6139f95cef7792ca8.pdf
NeurIPS_2024_submissions_huggingface
2024
Estimating Ego-Body Pose from Doubly Sparse Egocentric Video Data
Accept (poster)
Summary: This paper presents a framework to estimate full-body human poses from an egocentric head-mounted display. The input of the system mainly contains two parts: the head tracking signal given by the HMD, and the sparse hand pose signal estimated from egocentric video. The algorithm, DSPoser, is composed of two stages: temporal completion and spatial completion. In the temporal completion stage, DSPoser uses an encoder-decoder to generate the Gaussian distribution of hand pose states. In the spatial completion stage, DSPoser generates full-body poses from hand trajectories and head tracking signals. Experiments were performed on the Ego-Exo4D and AMASS datasets. Strengths: (1) The quantitative results are excellent, especially for doubly sparse data. Table 1 and Table 2 support the effectiveness of the proposed method. (2) The overall paper writing is clear. (3) Limitations are well-explained. (4) The ablation study about aleatoric and epistemic uncertainties is interesting to me. Weaknesses: (1) The novelty of the "newly introduced task (L.205)" is a bit limited. Both motion imputation and pose completion are widely investigated problems, and the combination of both problems seems not difficult to solve. (2) The paper only proves that the proposed hand trajectory imputation is better than linear interpolation for the doubly sparse task. As motion imputation/interpolation/inbetweening is a long-standing task, the contribution of the proposed hand imputation method (Uncertainty-aware MAE) is not clear. Technical Quality: 4 Clarity: 4 Questions for Authors: (1) For hand pose estimation, why use FrankMocap instead of the recently introduced ACR (ACR: Attention Collaboration-based Regressor for Arbitrary Two-Hand Reconstruction [CVPR 2023]) or IntagHand (Interacting Attention Graph for Single Image Two-Hand Reconstruction [CVPR 2022]), which are designed for hand mesh recovery only? Does the performance of hand mesh recovery affect the final full-body pose estimation?
(2) May the training on AMASS benefit Ego-Exo4D performance? (3) In Figure 3(b), why not demonstrate hand pose here? (4) Sec.3.3 is titled with "imputed hand trajectories and head tracking signal". However, L.146 said "from imputed hand trajectories" and did not mention "head tracking signal" in this section at all. Minor questions: In table.1, “x” should also be briefly explained like “y”. At L.81, V_1 … V_T_w should be defined (RGB images?) Definition of T_w (L. 114) is better moved to its first appearance (L.81) Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Limitations are clearly addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the valuable comments aimed at improving our paper. We will revise the draft according to the reviewer's suggestions. > Q1: Limited novelty Our novelty lies in our approach to solving the body pose estimation problem given doubly sparse data, specifically in how we address the under-constrained problem by measuring and exploiting uncertainty. Previous methods rely heavily on dense hand signals, requiring hand controllers for ego-body pose estimation. Another approach that only uses head poses to estimate the whole body does not utilize hand pose information. Our proposed solution strikes a novel balance between these two approaches, eliminating the need for hand controllers while achieving better results by incorporating a few constraints from detected hand poses. > Q2: The effectiveness of the Uncertainty-aware MAE As the reviewer noted, numerous works have addressed trajectory imputation. However, imputing the hand trajectory itself is not our primary task or the focus of our novelty. Our design of the MAE aims to capture uncertainty while imputing the hand trajectory, which differs from other imputation methods such as mask tokens, in-betweening, and interpolation. Additionally, we introduced several ways to utilize this uncertainty (sampling, dropout, and distribution embedding) in a diffusion model while spatially completing the full body. Given that our newly introduced task of estimating ego-body pose from doubly sparse video data is an under-constrained problem, one of our key motivations is to leverage the "uncertainty" that arises from this under-constrained data.
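Aleatoric-uncertainty heads of the kind described above are commonly trained with a heteroscedastic Gaussian negative log-likelihood, where the network predicts both a mean and a log-variance per output. A minimal one-dimensional sketch under that common assumption (not necessarily the exact loss used in the paper):

```python
import math

def gaussian_nll(y, mu, log_var):
    """Heteroscedastic Gaussian NLL (up to an additive constant):
    0.5 * (log sigma^2 + (y - mu)^2 / sigma^2)."""
    return 0.5 * (log_var + (y - mu) ** 2 / math.exp(log_var))

# With a fixed residual, predicting a larger variance down-weights the
# squared error but pays a log-variance penalty, so the model learns to
# report high variance only where imputation is genuinely uncertain.
err = 2.0
assert gaussian_nll(err, 0.0, 0.0) == 2.0
assert gaussian_nll(err, 0.0, 1.0) < gaussian_nll(err, 0.0, 0.0)
```

This trade-off is what lets an imputation model attach larger uncertainty to hand-pose frames far from any observation, which downstream modules can then exploit via sampling, dropout, or distribution embedding.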
> Q3: Hand pose estimation module and its effectiveness \begin{array}{l|cc} \hline \textbf{Methods} & \textbf{MPJPE} & \textbf{MPJVE} \\\\ \hline \hline \textbf{FrankMocap} & 16.84 \pm 0.04 & 39.86 \pm 0.05 \\\\ \textbf{ACR} & 16.69 \pm 0.05 & 40.12 \pm 0.06 \\\\ \textbf{Hand Ground Truth} & 16.43 \pm 0.02 & 37.49 \pm 0.04 \\\\ \hline \end{array} We agree that recently introduced hand models can be used instead of FrankMocap. Before we submitted the paper, we compared the results of FrankMocap with the ground-truth 3D hand joint locations of Ego-Exo4D and concluded that the effect of the hand detector is not significant. We believe this is because we utilized only the wrist 3D location from the detected hand. Therefore, even though FrankMocap was our initial choice only to prove the concept, we decided not to replace it. The table above compares the different inputs of the hand detector, and it shows that the performance difference from the ground truth and other hand detector models is not significant. > Q4: Training on AMASS benefits Ego-Exo4D performance? We greatly appreciate the suggestion of applying transfer learning to the Ego-Exo4D dataset. The Ego-Exo4D paper's baseline implementation of EgoEgo [1] took second place in the Ego Body Pose Estimation Challenge. This implementation has demonstrated the benefits of training on the AMASS dataset for improving performance on Ego-Exo4D. This method employs a conditional diffusion model, cross-attention for conditioning, and rotary positional embeddings with SLAM pose input. We recognize the potential of this approach and intend to explore its application in our future work on this task. [1] Li, Jiaman, Karen Liu, and Jiajun Wu. "Ego-body pose estimation via ego-head pose estimation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. > Q5: Hand visualization on Figure 3-(c) We appreciate the reviewer's feedback.
While our primary focus is on body pose estimation, rather than hand pose estimation, we understand the importance of comprehensive visualization. Therefore, we can provide visualizations of the canonical hand poses, similar to the approach we used for mesh visualization in Fig. 3-(c), to enhance clarity and completeness. > Q6: Writing clarity regarding our constraint in Section 3.3 Thank you for the valuable feedback. We acknowledge the oversight in Section 3.3, where the title mentions both "imputed hand trajectories and head tracking signal," but the text only references "imputed hand trajectories." We will clarify our method by including the head tracking signal in this section to ensure consistency and completeness. > Q7: Suggestions for clarity Thank you for pointing out the oversights. Following the reviewer's comments, we will make the following revisions to enhance clarity: 1. Add a brief explanation of $x$ in Table 1. 2. Clarify the dimension of the RGB video data. 3. Move $T_w$ to its first appearance in the text. These changes will help ensure the information is clear and easy to understand. --- Rebuttal Comment 1.1: Comment: The authors' responses are adequate to solve most of my concerns. I would like to keep my initial rating because I still think the proposed task is a bit incremental. Both the technical descriptions and evaluations are satisfactory. --- Reply to Comment 1.1.1: Comment: Thank you for your thorough review and for acknowledging that our responses addressed most of your concerns. We appreciate your thoughtful consideration and respect your decision to maintain your initial rating. Once again, thank you for your valuable feedback and for taking the time to review our work.
Summary: This paper proposes a system to estimate full-body pose from forward-facing egocentric videos. Dubbed “doubly sparse video data,” such data streams have the distinct characteristic that only the headset pose is persistent, while the hand pose estimation is only occasionally available. The proposed method first infills the hand motion from estimated hand motion from video information, and then uses the infilled motion (with estimated uncertainty) to estimate the full-body pose. The full-body pose estimation is built upon VQ-VAE representation and VQ-diffusion. Experiments show that the proposed pipeline outperforms SOTA methods. Strengths: - This work is very well-motivated; estimating the full-body pose from sparse egocentric views could have many applications in AR/VR and animation. The task is also very challenging, as hands are only visible in very few frames. - The proposed system is a complete solution to estimate full-body pose (including fingers) from egocentric videos and head tracking. It leverages the persistent signals (headset tracking) and occasional signals (hand pose) well by formulating it as a probabilistic infilling problem. The infilled hand motion then serves as input to a diffusion-based full-body pose estimator. - I find the uncertainty formulation a great addition to the current literature. While most methods just use diffusion-based pose estimation plus masking, the proposed MAE solution seems to be a principled way of obtaining a better hand pose trajectory based on sparse input. - Experimental results on the Ego-Exo-4D dataset and AMASS show that the proposed method outperforms SOTA methods. The evaluation is extensive and shows the results from the method well. Showing the results of using dense information (Table 3) also demonstrates the strength of the proposed method. 
Weaknesses: - For pose estimation, it would be very beneficial to provide estimated motion as videos to better judge the quality of the estimated pose and motion. Not providing videos weakens this work. - Since a complex system is proposed, it would be great to see some ablation about VQ-VAE. - Looking at Figure 7 in the appendix, it appears to me that the proposed method could be overfitting. There is no information on kicking the feet up for the human, but the estimated pose is kicking the feet up. Technical Quality: 3 Clarity: 3 Questions for Authors: If possible, some metric on the velocity/acceleration error of the estimated motion would help indicate the smoothness of the motion. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the acknowledgement of our motivation and the novelty of our method. We will improve the draft based on the valuable comments of the reviewer. > Q1: Qualitative comparisons against state-of-the-art approaches and video visualizations of the results. We appreciate your comment on the importance of qualitative comparisons. While we have visualized our qualitative results in a video format, we found that it is not permitted to include links during the rebuttal phase. As an alternative, we have included detailed comparative qualitative results in the PDF file attached to this rebuttal. We hope this additional information provides clarity and supports the quantitative improvements we reported. We also ensure that we will add video comparisons against state-of-the-art methods in the final version. > Q2: VQ-VAE ablation studies. \begin{array}{l|c|cccc} \hline \textbf{Methods} & \text{Pipeline} & \textbf{MPJPE} & \textbf{MPJVE} & \textbf{MPJRE} & \textbf{Jitter} \\\\ \hline \hline \text{BoDiffusion [3]} & \text{MAE} + \text{Skeleton Space Diffusion} & 7.35 & 31.33 & 5.47 & 1254.84 \\\\ \text{AvatarJLM [34]} & \text{MAE} + \text{Transformer} & 7.12 & 37.60 & 5.24 & 16.95 \\\\ \textbf{DSPoser (ours)} & \text{MAE} + \text{VQ-Diffusion} + \text{VQ-Decoder} & 5.51 & 24.19 & 4.09 & 4.27 \\\\ \hline \end{array} To evaluate the effectiveness of our pipeline, we implemented baseline models to tackle the pose estimation problem given doubly sparse data. For these baseline models, we introduced the MAE at the initial stage of their methods to complete the temporally sparse data, then fed the imputed trajectory into their respective models. The implementation of BoDiffusion as a baseline can serve as an ablation study for our VQ-VAE, as it applies the diffusion process directly to the skeleton space, contrasting with our approach of applying the diffusion process in the vector-quantized latent space.
Additionally, AvatarJLM can be considered another ablation study, as it utilizes a Transformer instead of a diffusion model to complete the sparse data. These results demonstrate that the VQ-VAE outperforms the other architectural options. > Q3: Overfitting issue. Thank you for your observation and insightful feedback. While it is possible that overfitting could explain this observation, there are other plausible explanations. To the best of our knowledge, when a human moves, the motion of each joint influences the others. Previous research, such as [1] and [2], has demonstrated that joint movements are not only connected to adjacent joints through bones (explicit relationships) but are also highly related to distant joints that are not directly connected (implicit relationships) in a certain motion context. Additionally, in [3], even though only the head position is utilized for whole-body estimation, the results often show very accurate lower-body movements. This indicates that even if there appears to be no direct information for reconstructing the kicking motion in our visualized example, the intermittent hand observations and dense head trajectory data may provide sufficient information to reconstruct the kicking motion. Therefore, what may seem like overfitting could actually be the model leveraging these implicit relationships to generate a plausible motion sequence. [1] Chi, Hyung-gun, et al. "Infogcn: Representation learning for human skeleton-based action recognition." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [2] Wu, Zhize, et al. "SelfGCN: Graph Convolution Network with Self-Attention for Skeleton-based Action Recognition." IEEE Transactions on Image Processing (2024). [3] Li, Jiaman, Karen Liu, and Jiajun Wu. "Ego-body pose estimation via ego-head pose estimation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. > Q4: Evaluation on the velocity/acceleration metric.
\begin{array}{l|cccc} \hline \textbf{Methods}&\text{MPJPE}&\text{MPJVE}&\text{MPJRE}&\text{Jitter} \\\\ \hline \text{AvatarPoser [10]}&40.42&64.07&16.37&27.89 \\\\ \text{Bodiffusion [3]}&46.45&75.33&17.99&2793.32 \\\\ \text{AvatarJLM [34]}&25.02&68.42&14.14&32.18 \\\\ \text{AvatarPoser [10]}&9.88&62.31&5.98&37.89 \\\\ \text{Bodiffusion [3]}&7.35&31.33&5.47&1254.84 \\\\ \text{AvatarJLM [34]}&7.12&37.60&5.24&16.95 \\\\\hline \textbf{DSPoser (Ours)}&\textbf{5.51}\pm0.02&\textbf{24.19}\pm0.10&\textbf{4.09}\pm0.02&\textbf{4.27}\pm0.03 \\\\\hline \end{array} \begin{array}{l|c|cccc} \hline \text{Methods} & \mathbf{y} & \text{MPJPE} & \text{MPJVE} & \text{MPJRE} & \text{Jitter} \\\\ \hline \hline \text{GT} & - & - & - & - & 4.01 \\\\ \text{VQ-VAE (Paper)} & \text{Full body} & 1.26 & 11.37 & 1.81 & 3.93 \\\\ \text{VQ-VAE (Opt'ed)} & \text{Full body} & 1.15 & 10.59 & 1.67 & 3.89 \\\\ \hline \text{AvatarPoser [10]} & \text{Dense traj.} & 4.18 & 29.40 & 3.21 & 13.63 \\\\ \text{Bodiffusion [3]} & \text{Dense traj.} & 3.63 & \mathbf{14.39} & \mathbf{\textcolor{blue}{2.70}} & 493.78 \\\\ \text{AvatarJLM [34]} & \text{Dense traj.} & \mathbf{3.35} & 20.79 & 2.90 & \mathbf{\textcolor{blue}{8.39}} \\\\ \hline \mathbf{DSPoser (Paper)} & \text{Dense traj.} & 3.61 \pm 0.01 & 18.36 \pm 0.03 & 2.81 \pm 0.02 & 4.08 \pm 0.02 \\\\ \mathbf{DSPoser (Opt'ed)} & \text{Dense traj.} & \mathbf{\textcolor{blue}{3.48 \pm 0.01}} & \mathbf{\textcolor{blue}{17.86 \pm 0.03}} & \mathbf{2.68 \pm 0.02} & \mathbf{4.03 \pm 0.02} \\\\ \hline \end{array} We deeply appreciate your valuable comments on our paper. Thanks to your suggestion, we found out that our method shows significantly better performance on the Jitter metric, which is often used to measure the smoothness of motion. Jitter is a measure of jerk calculated by the derivative of acceleration, and MPJVE, already reported in our paper, indicates the velocity error. 
As seen in the tables above and in the VQ-VAE ablation studies of Q2, our method produces smoother results than the other methods. --- Rebuttal Comment 1.1: Title: Reviewer Response Comment: I thank the authors for the detailed response and additional experiments. My concerns are addressed and I would like to raise my score to Accept. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful feedback and for taking the time to review our additional experiments. We appreciate your support and are pleased that our response addressed your concerns. We're grateful for your decision to raise the score.
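The Jitter metric discussed in this exchange measures jerk, the third time-derivative of joint position, and can be computed from a discrete trajectory with a third-order finite difference. A minimal pure-Python sketch with a hypothetical 1-D toy trajectory (real evaluations apply this per joint in 3-D and average):

```python
def jitter(positions, dt=1.0):
    """Mean magnitude of jerk, estimated via the third finite difference
    of position: (p[t+3] - 3 p[t+2] + 3 p[t+1] - p[t]) / dt^3."""
    jerks = []
    for t in range(len(positions) - 3):
        j = (positions[t + 3] - 3 * positions[t + 2]
             + 3 * positions[t + 1] - positions[t]) / dt ** 3
        jerks.append(abs(j))
    return sum(jerks) / len(jerks)

# A constant-acceleration trajectory (p = 0.5 t^2) has zero jerk,
# so its jitter is exactly zero under this finite-difference estimate.
quad = [0.5 * t * t for t in range(10)]
assert jitter(quad) == 0.0
```

Low jitter thus indicates smooth motion, which is why the metric complements MPJVE (velocity error) when judging the plausibility of generated poses.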
Summary: The paper introduces the task of full-body pose estimation from temporally and spatially sparse tracking inputs. It differs from the prior work in assuming only the partial availability of hand tracking, which is a common scenario for head-mounted displays (HMD) without hand controllers. To address this problem, a two-stage approach is proposed. First, a masked autoencoder (MAE) infills the missing hand joints along with an uncertainty prediction using only the available frames and the head tracking. Then, the imputed tracking data (hands and head) are passed to a VQ-Diffusion model to predict the remaining body. The proposed model, namely DSPoser, is evaluated on the AMASS and Ego-Exo4D datasets where it performs better than the baselines in this new problem setting. Strengths: Originality: The paper introduces a new challenge to the full body tracking domain, which has become an active research area due to the increasing number of HMD devices in the market. The proposed solution combines techniques from various works effectively. Quality: The proposed two-stage approach is practical. The masked autoencoder with uncertainty estimation addresses the temporal sparsity problem and decouples the formerly known spatially sparse body tracking (i.e., hands and head are always available) from the temporal sparse setting. Clarity: The paper is well-organized and easy to follow. There is enough background information to understand the proposed method and make connections to the prior works. The authors also provide experimental details thoroughly which seem to be sufficient for reproducibility.  Significance: The new problem setting is novel and I expect it to be more commonly addressed in the future. Hence, this paper could be a reference for future works. Weaknesses: 1- I think the evaluations in Tables 1 and 2 could be better structured and also more fair. 
To make an apples-to-apples comparison, it would be better to group methods using a particular type of input data. For example, EgoEgo should be compared against the DSPoser with only the head tracking inputs. Similarly, the Bodiffusion could also use the MAE imputation. Considering that the VQ-Diffusion and the motion tokenizer are taken from the prior work, MAE as being the main contribution could be better highlighted in this way. The “naive Bodiffusion extension” is simply too naive. Stronger baselines could be introduced. 2- The runtime performance analysis is missing. Considering that the proposed problem setting aims for real-time applications, a masked autoencoder with additional uncertainty computations followed by a diffusion inference is not the optimal candidate. I acknowledge that this is covered in the limitations section. What could be done about it? 3- After reading the “Uncertainty-aware MAE” section, I assumed the total uncertainty (Eq. 6) is proposed. The ablation study, however, reveals that the aleatoric uncertainty gives better performance. Is it the one used in the experiments? 4- A supplementary video with qualitative comparisons would be very helpful. 5- [1*] and [2*] (as concurrent work) could also be covered in the related work section. These are not weaknesses but suggestions: - I think mixture density networks [3*] could be applied in this setting which would also simplify the story around the uncertainty. The network predicts parameters of a Gaussian Mixture Model, not very different from the current aleatoric uncertainty. - Line #243: “The dropout strategy achieves...” -> “The sampling strategy achieves...” ``` [1*] Du, Yuming, et al. "Avatars grow legs: Generating smooth human motion from sparse tracking inputs with diffusion model." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [2*] Dai, Peng, et al. "HMD-Poser: On-Device Real-time Human Motion Tracking from Scalable Sparse Observations." 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [3*] Bishop, Christopher M. "Mixture density networks." (1994). ``` Technical Quality: 3 Clarity: 3 Questions for Authors: Please see my questions in the previous section and address my main concern on the evaluations. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful feedback on our paper. We will improve the draft based on the comments. > Q1: Stronger baselines & Reorganization of Tables 1 and 2 Due to limited time and resources, we are currently only able to provide baseline results on the AMASS dataset. We are currently working on training baseline methods on the Ego-Exo4D dataset, and we will update as soon as they are completed. Please refer to Q1 in the Author Rebuttal to see our response on stronger baselines. Thank you for your valuable feedback. We appreciate the suggestions for improving the structure and fairness of our evaluations. Since we now have more baselines, as shown in Table A in the pdf, we will re-organize the tables for a fair comparison and easier understanding. > Q2: The runtime performance analysis, and possible directions. Please refer to Q2 of the Author Rebuttal. Thank you for your feedback regarding the runtime performance analysis. We understand the importance of computational efficiency, especially for real-time applications. In Table B in the pdf and Q2 of the Author Rebuttal, we present a detailed comparison of the computational complexity of the VQ-VAE, MAE, and VQ-Diffusion modules, highlighting the number of parameters, multiply-accumulate operations (MACs), and inference time. Our results show that the overhead introduced by the MAE, including uncertainty computations, is minimal compared to the significant overhead from the diffusion process. Specifically, the MAE adds only 3 ms to the inference time, which is negligible compared to the 955 ms required by the VQ-Diffusion module. Therefore, the primary computational burden arises from the diffusion process rather than the MAE with uncertainty computations. When we chose the diffusion model, we recognized the heavy computational cost but concluded it was more appropriate for solving the under-constrained problem of ego-body pose estimation.
To mitigate this heavy computation issue, we selected the VQ-Diffusion method, which denoises in discretized latent spaces and is considered more computationally efficient. Additionally, as shown in Table C in the pdf, our approach allows skipping denoising steps using the reparameterization trick, following methods from [1] and [2]. The results show that our method provides four times faster options, effectively balancing the trade-off between MPJPE and inference time. Recent research, such as [3] and [4], has focused on improving the speed of diffusion inference. We expect that as these advancements continue, the diffusion model's versatility and extendability will become even more beneficial, reducing the cost of the diffusion sampling process. [1] Austin, Jacob, et al. "Structured denoising diffusion models in discrete state-spaces." Advances in Neural Information Processing Systems 34 (2021): 17981-17993. [2] Gu, Shuyang, et al. "Vector quantized diffusion model for text-to-image synthesis." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022. [3] Zheng, Hongkai, et al. "Fast sampling of diffusion models via operator learning." International conference on machine learning. PMLR, 2023. [4] Yin, Tianwei, et al. "One-step diffusion with distribution matching distillation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. > Q3: Clarification on Uncertainty Yes, aleatoric uncertainty with a sampling strategy is used in our Table 1 and Table 2. To avoid confusion, we will clarify the strategy and the type of uncertainty at the beginning of Section 3.3. > Q4: Qualitative comparisons against state-of-the-art approaches We appreciate your comment on the importance of qualitative comparisons. While we have visualized our qualitative results in a video format, we found that it is not permitted to include links during the rebuttal phase. 
As an alternative, we have included detailed comparative qualitative results in the PDF file attached to this rebuttal. We hope this additional information provides clarity and supports the quantitative improvements we reported. We will also add video comparisons against state-of-the-art methods in the final version. > Q5: Suggestion of concurrent related works Thank you for the suggestion. We will include the suggested references in the related work section to ensure comprehensive coverage of concurrent research. > Q6: Mixture Density Network as Aleatoric Uncertainty \begin{array}{l|ccc} \hline \textbf{Methods} & \textbf{MPJPE} & \textbf{MPJVE} & \textbf{MPJRE} \\\\ \hline \hline \text{w/o Uncertainty}&6.05\pm0.01&30.12 \pm 0.04&4.36 \pm 0.00 \\\\ \text{DSPoser w/ MDN}&5.84\pm0.05&28.34 \pm 0.15&4.72 \pm 0.04 \\\\ \textbf{DSPoser (Ours)}&5.51\pm0.02&24.19 \pm 0.10&4.09 \pm 0.02 \\\\\hline \end{array} Thank you for the interesting idea for uncertainty measurement. We report the results of the whole pipeline after substituting the head of our MAE with a Mixture Density Network (MDN), setting the number of mixtures \(M\) to 4 for a fair comparison. Similar to the calculation of aleatoric uncertainty in our paper, we measure the aleatoric uncertainty of the MDN by $\mathcal{U}_{ale}(\mathbf{x}) \approx M^{-1} \sum_i \pi_i\sigma_i^2 (\mathbf{x})$, where $\pi_i$ is the $i$-th mixture weight. The results show that the MDN improves performance compared to our method without uncertainty; however, it performs worse than the MAE approaches. \begin{array}{cc} \hline \textbf{Methods} & \textbf{MPJPE} \\\\\hline \textbf{MDN} & 13.45 \\\\ \textbf{MAE (Ours)} & 10.85 \\\\\hline \end{array} To investigate the performance difference, we also analyzed the results of temporal completion of hand trajectories.
Unlike the MAE, the MDN loss often diverged, so we stopped MDN training early at 800 epochs, leading to worse hand-trajectory MPJPE in contrast to the MAE model trained for 4000 epochs. > Q7: Typos. We will revise the sentence as suggested. --- Rebuttal Comment 1.1: Comment: We've completed the baseline experiments on the Ego-Exo4D dataset and would like to share the results. \begin{array}{l|c|ccc} \hline \textbf{Methods} & \textbf{Imputation} & \textbf{MPJPE} & \textbf{MPJVE} & \textbf{Jitter} \\\\\hline \text{AvatarPoser [10]} & \text{Interpolation} & 47.28 & 89.34 & 65.39 \\\\ \text{Bodiffusion [3]} & \text{Interpolation} & 59.81 & 120.12 & 142.32 \\\\ \text{AvatarJLM [34]} & \text{Interpolation} & 43.01 & 61.98 & 54.23 \\\\ \text{AvatarPoser [10]} & \text{MAE} & 24.54 & 62.34 & 44.24 \\\\ \text{Bodiffusion [3]} & \text{MAE} & 22.12 & 53.30 & 93.80 \\\\ \text{AvatarJLM [34]} & \text{MAE} & 21.08 & 45.77 & 39.04 \\\\\hline \textbf{DSPoser (Ours)} & \text{MAE} & \textbf{16.84}\pm0.04 & \textbf{39.86}\pm0.05 & \textbf{19.21}\pm0.04\\\\\hline \end{array}
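As a minimal NumPy sketch of the MDN aleatoric-uncertainty estimate used in Q6 above, $\mathcal{U}_{ale}(\mathbf{x}) \approx M^{-1}\sum_i \pi_i\sigma_i^2(\mathbf{x})$ — the function name and array shapes are illustrative assumptions, not the actual model code:

```python
import numpy as np

def mdn_aleatoric_uncertainty(pi, sigma, num_mixtures=4):
    """Aleatoric uncertainty of an MDN head, following the Q6 formula:
    U_ale(x) ~= M^{-1} * sum_i pi_i * sigma_i^2(x).
    `pi` (mixture weights) and `sigma` (per-component std devs) both have
    shape (..., M); the result drops the mixture axis."""
    pi = np.asarray(pi, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    return (pi * sigma ** 2).sum(axis=-1) / num_mixtures

# Toy example with M = 4 mixture components for a single prediction.
pi = np.array([0.4, 0.3, 0.2, 0.1])      # mixture weights, sum to 1
sigma = np.array([0.5, 1.0, 0.2, 2.0])   # per-component standard deviations
u = mdn_aleatoric_uncertainty(pi, sigma)  # ≈ 0.202
```

A higher `u` could be used to flag frames where the imputed hand trajectory is less reliable, analogous to how the MAE's aleatoric uncertainty is used.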
Therefore, we can answer your question with a 'yes'; however, since the question may assume an online-inference setup, we clarify that the inference time listed in Table B also represents the per-frame time in an online setup. > Additional experiments under a real-time inference setup. Recognizing that our initial comparison in the previous rebuttal was not conducted under fair conditions, we performed additional experiments to ensure a fair comparison with the baselines. \begin{array}{l|c|c|c|cccc} \hline \textbf{Methods} & \textbf{Imputation} & \textbf{Sliding step} & \textbf{Averaging} & \textbf{MPJPE} & \textbf{MPJVE} & \textbf{MPJRE} & \textbf{Jitter} \\\\\hline \text{AvatarPoser [10]} & \text{MAE} & \text{1} & & 9.88 & 62.31 & 5.98 & 37.89 \\\\ \text{BoDiffusion [3]} & \text{MAE} & \text{20} & \text{temporal avg} & 7.35 & 31.33 & 5.47 & 1254.84 \\\\ \text{AvatarJLM [34]} & \text{MAE} & \text{1} & & 7.12 & 37.60 & 5.24 & 16.95 \\\\\hline \textbf{DSPoser (Paper)} & \text{MAE} & \text{20} & \text{temporal avg} & 5.51 & 24.19 & 4.09 & 4.27 \\\\ \text{DSPoser (\\#1) } & \text{MAE} & \text{1} & & 5.87 & 52.38 & 4.31 & 34.12 \\\\ \text{DSPoser (\\#2) } & \text{MAE} & \text{1} & \text{temporal avg} & 5.23 & 21.73 & 3.83 & 5.94 \\\\ \text{DSPoser (\\#3) } & \text{MAE} & \text{1} & \text{4 samples} & 5.68 & 29.48 & 4.23 & 12.98 \\\\\hline \end{array} Before discussing the results, we want to clarify that our primary focus is on introducing two key aspects: (1) the underexplored problem of doubly sparse video data, and (2) a generic multi-stage framework to address such problems. The specific choices within our framework (e.g. VQ-Diffusion, MAE, and the sliding window) were deliberately kept simple to demonstrate the efficacy of the intermittent tracking signal and our multi-stage framework. As reviewers jBAY and 2EAc noted, the computational demands of diffusion models pose concerns for real-time applications.
While we provided baseline results to justify our design choices, we want to emphasize that alternatives such as AvatarJLM and AvatarPoser (instead of the VQ-Diffusion-style pose estimation algorithms used in the submission) are also viable within our multi-stage framework. These alternatives can balance time complexity and accuracy, making them suitable for real-time applications. > Exp. \#1, fair comparison. We first tested our method **using the evaluation protocol of AvatarPoser and AvatarJLM**, with a sliding window step of 1 where only the final output frame (current frame) is used for each step. Our method performed worse in Jitter and MPJVE compared to AvatarJLM but showed better performance in MPJPE and MPJRE. We believe this drop is due to the probabilistic nature of our method, unlike the deterministic approach of AvatarJLM and AvatarPoser. > Exp. \#2, temporal averaging Next, we modified Exp #1 to better utilize the diversity from our uncertainty modeling by **averaging overlapped frames** while advancing the sliding window. This significantly reduced errors across all metrics. However, when it comes to real-time inference, the improvement does not affect the current frame but only applies to **historical frames**, which are not useful for real-time inference. > Exp. \#3, multiple sampling Finally, we implemented a multiple-sampling approach, **averaging four samples** for each step using the end frames (current frames). This method outperformed AvatarJLM across all metrics and can be efficiently implemented with parallel processing, resulting in minimal time overhead relative to Exp #1. In conclusion, we agree with reviewer 2EAc that our method's better performance in MPJVE and Jitter is due to the difference in sliding steps. However, based on Exp #2 and Exp #3, the performance drop appears to stem from our framework's ability to generate diverse motions from the same input.
This issue can be mitigated by sampling multiple times using parallel processing and averaging the results. --- Rebuttal Comment 2.1: Comment: Dear Reviewer jBAY, We want to express our sincere gratitude for your insightful feedback on our manuscript. As the author-reviewer discussion period is coming to an end, we would like to ask whether our responses have addressed your concerns. We look forward to further discussion of any remaining questions you might have! Thank you once again for your time and valuable input. Your comments have significantly contributed to improving our work. Best regards, Authors
Summary: This paper presents a new method for ego-body pose estimation from egocentric videos. Compared to previous methods that assume hand tracking signals are always available, this paper focuses on the case where hand poses are captured only intermittently from egocentric videos. To solve this, the paper proposes a two-stage method that first performs temporal completion of the hand trajectory and then spatial completion of the full-body pose. Experiments show better performance than selected baseline methods. Strengths: 1. The paper is well-written and very easy to follow. The figures and tables are well presented. 2. I like the point that the authors use the hand poses captured intermittently from egocentric video data instead of assuming dense tracking signals. The two-stage method also sounds reasonable for this case. Weaknesses: 1. It would be interesting to see how the proposed method would compare to the FoV modeling in [1], which focused on the same task setting when the hand tracking signals are intermittent. However, [1] is not discussed in the submission. [1] Jiang et al. EgoPoser: Robust Real-Time Ego-Body Pose Estimation in Large Scenes, arXiv 2023 2. In Table 3, when trained only on dense data, the proposed methods performed worse than previous methods. So, I was wondering whether the model generalizes well to different settings. 3. Metrics related to computational complexity, such as the number of parameters, FLOPs, and inference time, are not provided. This is important to ensure a fair comparison with previous methods, and to see the potential in real-world applications. 4. Sota methods like AvatarPoser and AvatarJLM are only compared in Table 3 but not in Tables 1 and 2. It would be better to have consistency when comparing methods. 5. Some technical details seem missing. For example, what are the lengths of input and output frames? How is inference performed during evaluation (e.g., the step size of the sliding window)? 6.
There are no video comparisons provided, which are important for this task. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The authors acknowledged that using diffusion models could limit the usage of real-time applications. What is the motivation for using diffusion models for this task? Some recent papers like [2] even show AvatarPoser performed better than Diffusion models. In Table 3, the previous method, AvatarJLM, also shows better performance than the proposed method. [2] Ma et al. Nymeria: A Massive Collection of Multimodal Egocentric Daily Motion in the Wild, arXiv 2024 2. IMU measurements are mentioned multiple times in this paper. Are they used in the paper? If yes, how? 3. Will the code be published? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitation has been discussed in Section 7. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewers' insightful comments on our paper. > Q1: FoV modeling result of EgoPoser (ECCV 2024). \begin{array}{c|cc|cc|cc}\hline \textbf{Strategies}&\textbf{MPJPE (180°)}&\textbf{MPJVE (180°)}&\textbf{MPJPE (120°)}&\textbf{MPJVE (120°)}&\textbf{MPJPE (90°)}&\textbf{MPJVE (90°)} \\\\\hline \text{EgoPoser}&5.31&39.69&6.07&46.01&6.60&48.25 \\\\ \text{DSPoser (Ours)}&4.80&22.58&5.28&23.13&5.51&24.19 \\\\\hline \end{array} Thank you for bringing EgoPoser (ECCV 2024) to our attention. We were unaware of this work at the time of our submission, and it is encouraging to see parallel interest in ego-body pose estimation from intermittent observations. We will be sure to cite this work in the next version of our paper. Regarding contributions, we understand that EgoPoser's main focus is on preparing training data based on field-of-view (FoV) modeling rather than random masking. While we share an interest in FoV considerations, our work offers distinct algorithmic contributions, including: 1. A multi-stage approach to pose estimation. 2. An uncertainty-aware masked auto-encoder (MAE). These aspects of our work were recognized by reviewers as innovative contributions to the field. We believe our approach complements the ideas presented in EgoPoser. Moreover, the experimental data presented in the preceding table demonstrates that our approach surpasses EgoPoser in performance, with notably superior results in scenarios involving a narrow field-of-view (FoV). > Q2: Generalization on Ego-Body Pose Estimation given dense data. Please see the table in Q3 of the Author Rebuttal. The quantitative results of our method are influenced by the performance of the VQ-VAE component. Since Table 3 in the main paper is intended to show the versatility of our framework rather than argue our main contribution, we did not conduct an extensive hyper-parameter search to achieve state-of-the-art performance initially.
To address this, we performed hyperparameter tuning on the VQ-VAE to mitigate performance loss attributed to this component. Following these adjustments, we observed improved results compared to other methods. The results demonstrate that while our method may not exhibit the best performance across all metrics, it consistently shows at least the second-best performance for all metrics. A one-to-one comparison with other baselines reveals that our method outperforms them in at least 3 out of 4 metrics. Specifically, we included the Jitter metric to provide a comprehensive analysis of our method, as suggested by Reviewer VMPC. Our method achieves significantly better smoothness, indicating that our approach generates smoother motion close to the ground truth in terms of Jitter. > Q3: Metrics related to computational complexity. Please see the tables in Q2 of the Author Rebuttal, or Tables B and C in the pdf. We recognize the importance of including metrics related to computational complexity. While we have roughly provided the inference time in the supplementary material, we agree that a more detailed comparison would be beneficial. Including metrics such as the number of parameters, MACs, and inference time will offer a more comprehensive comparison with previous methods and better illustrate the potential of our approach for real-world applications. We will incorporate these details in the revised version of our paper. > Q4: Consistency between tables Please refer to the table in Q1 of the Author Rebuttal, or Table A in the pdf file. Thank you for highlighting this inconsistency. Following the reviewer's comment, we plan to update Tables 1 and 2 to include comparisons with AvatarPoser and AvatarJLM under various imputation methods, ensuring that relevant state-of-the-art methods are consistently evaluated throughout the paper. However, due to limited time and resources, we are currently only able to provide baseline results on the AMASS dataset.
We are working on training baseline methods on the Ego-Exo4D dataset and will update the results as soon as they are completed. > Q5: Missing details Thank you for bringing this to our attention. We realize that some technical details were not clearly outlined in our submission. As described in Supplementary Section A, the window size is set to 40 frames, so the lengths of the input and output frames can be inferred as 40 frames from the explanation in the Preliminary section. However, we acknowledge that this is not clearly stated and easy to miss, so we will explicitly state these details in the main paper. Additionally, we will specify the sliding window size, which is set to 20 frames. Upon a thorough review of the paper, we also noticed that some details of the hand detectors, such as how we handled visibility for the Ego-Exo4D dataset, were not clearly stated. We will ensure that these details, along with the step size of the sliding window used during evaluation, are clearly presented in the main text to provide a complete understanding of our methodology. > Q6: Qualitative comparisons against SoTA Please refer to Q4 of the Author Rebuttal. > Q7: The motivation for using diffusion models Please refer to Q2 of the Author Rebuttal. > Q8: How we used IMU information. The Ego-Exo4D dataset, collected by Meta using the Aria device, includes head trajectory data processed from IMU measurements. In the paper, we refer to this head trajectory as the tracking signal from the IMU, which is integral to our analysis and experiments. For the AMASS dataset, while IMU data is not used, we follow the approach of previous works such as AvatarPoser, where aggregated data of joint pose, joint velocity, 6D rotation, and angular velocity mimic the tracking signals from IMU. Both AvatarPoser and AvatarJLM have demonstrated that models trained with these signals are applicable to real-world data from AR devices. > Q9: Lack of publicly available code.
Please refer to Q5 of the Author Rebuttal. --- Rebuttal Comment 1.1: Comment: We've completed the baseline experiments on the Ego-Exo4D dataset and would like to share the results. \begin{array}{l|c|ccc} \hline \textbf{Methods} & \textbf{Imputation} & \textbf{MPJPE} & \textbf{MPJVE} & \textbf{Jitter} \\\\\hline \text{AvatarPoser [10]} & \text{Interpolation} & 47.28 & 89.34 & 65.39 \\\\ \text{Bodiffusion [3]} & \text{Interpolation} & 59.81 & 120.12 & 142.32 \\\\ \text{AvatarJLM [34]} & \text{Interpolation} & 43.01 & 61.98 & 54.23 \\\\ \text{AvatarPoser [10]} & \text{MAE} & 24.54 & 62.34 & 44.24 \\\\ \text{Bodiffusion [3]} & \text{MAE} & 22.12 & 53.30 & 93.80 \\\\ \text{AvatarJLM [34]} & \text{MAE} & 21.08 & 45.77 & 39.04 \\\\\hline \textbf{DSPoser (Ours)} & \text{MAE} & \textbf{16.84}\pm0.04 & \textbf{39.86}\pm0.05 & \textbf{19.21}\pm0.04\\\\\hline \end{array} --- Rebuttal Comment 1.2: Comment: Thank you for your detailed response! While some of my concerns have been addressed, there remains an issue regarding the sliding window size. Previous methods like AvatarPoser and AvatarJLM set the sliding window size to one frame to simulate real-time inference. However, according to the rebuttal, the sliding window size is 20, which could explain the improved smoothness metrics (i.e., MPJVE and Jitter, as mentioned in Q3 of the rebuttal). To ensure a fair comparison with previous methods, I would recommend adjusting the sliding window size accordingly. --- Reply to Comment 1.2.1: Comment: Thank you for your valuable feedback on ensuring a fair comparison. We agree with your concern and have conducted additional experiments on the AMASS dataset to provide a more accurate and fair comparison with the baseline methods.
\begin{array}{l|c|c|c|cccc} \hline \textbf{Methods} & \textbf{Imputation} & \textbf{Sliding step} & \textbf{Averaging} & \textbf{MPJPE} & \textbf{MPJVE} & \textbf{MPJRE} & \textbf{Jitter} \\\\\hline \text{AvatarPoser [10]} & \text{MAE} & \text{1} & & 9.88 & 62.31 & 5.98 & 37.89 \\\\ \text{BoDiffusion [3]} & \text{MAE} & \text{20} & \text{temporal avg} & 7.35 & 31.33 & 5.47 & 1254.84 \\\\ \text{AvatarJLM [34]} & \text{MAE} & \text{1} & & 7.12 & 37.60 & 5.24 & 16.95 \\\\\hline \textbf{DSPoser (Paper)} & \text{MAE} & \text{20} & \text{temporal avg} & 5.51 & 24.19 & 4.09 & 4.27 \\\\ \text{DSPoser (\\#1) } & \text{MAE} & \text{1} & & 5.87 & 52.38 & 4.31 & 34.12 \\\\ \text{DSPoser (\\#2) } & \text{MAE} & \text{1} & \text{temporal avg} & 5.23 & 21.73 & 3.83 & 5.94 \\\\ \text{DSPoser (\\#3) } & \text{MAE} & \text{1} & \text{4 samples} & 5.68 & 29.48 & 4.23 & 12.98 \\\\\hline \end{array} Before discussing the results, we want to clarify that our primary focus is on introducing two key aspects: (1) the underexplored problem of doubly sparse video data, and (2) a generic multi-stage framework to address such problems. The specific choices within our framework (e.g. VQ-Diffusion, MAE, and the sliding window) were deliberately kept simple to demonstrate the efficacy of the intermittent tracking signal and our multi-stage framework. As reviewers jBAY and 2EAc noted, the computational demands of diffusion models pose concerns for real-time applications. While we provided baseline results to justify our design choices, we want to emphasize that alternatives such as AvatarJLM and AvatarPoser (instead of VQ-Diffusion style pose estimation algorithms used in the submission) are also viable within our multi-stage framework. These alternatives can balance time complexity and accuracy, making them suitable for real-time applications. > Exp. \#1, fair comparison.
We first tested our method **using the evaluation protocol of AvatarPoser and AvatarJLM**, with a sliding window step of 1 where only the final output frame (current frame) is used for each step. Our method performed worse in Jitter and MPJVE compared to AvatarJLM but showed better performance in MPJPE and MPJRE. We believe this drop is due to the probabilistic nature of our method, unlike the deterministic approach of AvatarJLM and AvatarPoser. > Exp. \#2, temporal averaging Next, we modified Exp #1 to better utilize the diversity from our uncertainty modeling by **averaging overlapped frames** while advancing the sliding window. This significantly reduced errors across all metrics. However, when it comes to real-time inference, the improvement (averaging overlapped frames) does not affect the current frame but only applies to **historical frames**, which are not useful for real-time inference. > Exp. \#3, multiple sampling Finally, we implemented a multiple-sampling approach, **averaging four samples** for each step using the end frames (current frames). This method outperformed AvatarJLM across all metrics and can be efficiently implemented with parallel processing, resulting in minimal time overhead relative to Exp #1. In conclusion, we agree with the reviewers that our method's better performance in MPJVE and Jitter is due to the difference in sliding steps. However, as shown in Exp #1, #2, and #3, the performance drop seems to result from our framework's ability to generate diverse motions from the same input. This issue can be mitigated by sampling multiple times using parallel processing and averaging the results.
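The temporal averaging used in Exp. #2 above — overlapping fixed-length prediction windows merged by per-frame averaging — can be sketched as follows; the helper name and array shapes are illustrative assumptions rather than the actual DSPoser code:

```python
import numpy as np

def temporal_average(windows, step):
    """Merge overlapping prediction windows by averaging shared frames.
    `windows` has shape (num_windows, window_len, dim): one fixed-length
    prediction emitted every `step` frames along the sequence."""
    windows = np.asarray(windows, dtype=float)
    n, w, d = windows.shape
    total = step * (n - 1) + w
    acc = np.zeros((total, d))   # summed predictions per frame
    cnt = np.zeros((total, 1))   # number of windows covering each frame
    for i, win in enumerate(windows):
        acc[i * step : i * step + w] += win
        cnt[i * step : i * step + w] += 1
    return acc / cnt

# Two 4-frame windows sliding by 2 frames over a 1-D signal:
w1 = np.arange(0, 4, dtype=float).reshape(4, 1)  # covers frames 0..3
w2 = np.arange(2, 6, dtype=float).reshape(4, 1)  # covers frames 2..5
merged = temporal_average(np.stack([w1, w2]), step=2)  # 6 merged frames
```

With a 40-frame window and step 20 (the paper's setting), interior frames average two overlapping predictions; with step 1, the averaging only revises historical frames, which is why the gains of Exp. #2 do not carry over to the current frame in an online setup.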
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and effort in helping us improve the paper. We appreciate your acknowledgment of the novelty and valuable suggestions to improve our method. In this rebuttal, we want to clarify a few common questions raised by reviewers. Please note that **experiments** are conducted on the **AMASS dataset** unless otherwise stated. > Q1: The Necessity of Stronger Baselines \begin{array}{l|c|cccc} \hline \textbf{Methods}&\textbf{Imputation}&\text{MPJPE}&\text{MPJVE}&\text{MPJRE}&\text{Jitter} \\\\ \hline \text{AvatarPoser [10]}&\text{interpolation}&40.42&64.07&16.37&27.89 \\\\ \text{Bodiffusion [3]}&\text{interpolation}&46.45&75.33&17.99&2793.32 \\\\ \text{AvatarJLM [34]}&\text{interpolation}&25.02&68.42&14.14&32.18 \\\\ \text{AvatarPoser [10]}&\text{MAE}&9.88&62.31&5.98&37.89 \\\\ \text{Bodiffusion [3]}&\text{MAE}&7.35&31.33&5.47&1254.84 \\\\ \text{AvatarJLM [34]}&\text{MAE}&7.12&37.60&5.24&16.95 \\\\\hline \textbf{DSPoser (Ours)}&\text{MAE}&\textbf{5.51}\pm0.02&\textbf{24.19}\pm0.10&\textbf{4.09}\pm0.02&\textbf{4.27}\pm0.03 \\\\\hline \end{array} Please find the detailed table in the pdf file. We acknowledge that stronger baselines are needed to prove the effectiveness of our methods. Therefore, we implemented the baselines with linear interpolation and MAE imputation. The table demonstrates that our method outperforms these baselines. > Q2: Motivation of Diffusion Model & Its Computational Complexity We appreciate the question regarding our choice of diffusion models despite potential limitations for real-time applications. We have two reasons for choosing a diffusion model for this task: First, we considered the ego-body pose estimation task to be an under-constrained problem, so we chose a diffusion model to leverage the inherent uncertainty of the task. Second, when we first designed our framework, we planned to incorporate multi-modal inputs, such as image features from ego-centric videos and gaze information, in addition to sparse hand data.
This integration would enable our model to generate diverse motion sequences conditioned on multi-modal inputs, which are available in the Ego-Exo4D data. We believe that the diffusion model's versatility and extendability make it well-suited for these types of multi-modal integrations. This potential for extension reinforces our choice of diffusion models as a foundational element in our research, providing a flexible and powerful tool for future developments in this field. \begin{array}{c|ccc} \hline \textbf{Module}&\textbf{\\# of Params}&\textbf{MACs}&\textbf{Time} \\\\ \hline \text{VQ-VAE}&17.9 \text{ M}&3.6 \text{ G}&3 \text{ ms} \\\\ \text{MAE}&51.3 \text{ M}&23.3 \text{ G}&4 \text{ ms} \\\\ \text{VQ-Diffusion}&74.2 \text{ M}&1190.2 \text{ G}&958 \text{ ms} \\\\ \hline \end{array} \begin{array}{c|cccc|c} \hline \text{Infer. Steps}&\text{Train. Steps 25}&\text{Train. Steps 33}&\text{Train. Steps 50}&\text{Train. Steps 100}&\text{Infer. Time (ms)}\\\\\hline 25&5.83&5.92&5.69&8.72&278\\\\ 33&-&5.67&5.63&5.58&348\\\\ 50&-&-&5.61&5.53&522\\\\ 100&-&-&-&5.51&1013\\\\\hline \end{array} We appreciate the feedback and recognize the importance of including metrics related to computational complexity. While we have only roughly provided the inference time in the supplementary material following the NeurIPS 2024 submission policy, we agree that a more detailed comparison would be beneficial. We will incorporate these details in the revised version of our paper to provide a thorough evaluation of our method's computational complexity. > Q3: Evaluation on Velocity and Acceleration.
\begin{array}{l|c|cccc} \hline \text{Methods}&\mathbf{y}&\text{MPJPE}&\text{MPJVE}&\text{MPJRE}&\text{Jitter} \\\\\hline \text{GT}&-&-&-&-&4.01 \\\\ \text{VQ-VAE (Paper)}&\text{Full body}&1.26&11.37&1.81&3.93 \\\\ \text{VQ-VAE (Opt'ed)}&\text{Full body}&1.15&10.59&1.67&3.89 \\\\\hline \text{AvatarPoser [10]}&\text{Dense traj.}&4.18&29.40&3.21&13.63 \\\\ \text{Bodiffusion [3]}&\text{Dense traj.}&3.63&\mathbf{14.39}&\mathbf{\textcolor{blue}{2.70}}&493.78 \\\\ \text{AvatarJLM [34]}&\text{Dense traj.}&\mathbf{3.35}&20.79&2.90&\mathbf{\textcolor{blue}{8.39}} \\\\\hline \mathbf{DSPoser (Paper)}&\text{Dense traj.}&3.61\pm0.01&18.36\pm0.03&2.81\pm0.02&4.08\pm0.02 \\\\ \mathbf{DSPoser (Opt'ed)}&\text{Dense traj.}&\mathbf{\textcolor{blue}{3.48\pm0.01}}&\mathbf{\textcolor{blue}{17.86\pm0.03}}&\mathbf{2.68\pm0.02}&\mathbf{4.03\pm0.02} \\\\\hline \end{array} We found that our method shows significantly better performance on the Jitter metric, which is often used to measure the smoothness of motion. Jitter measures jerk, computed as the derivative of acceleration, while MPJVE indicates the velocity error, as reported in our paper. As seen in the table above and the table in Q1, our method produces smoother results than the other methods, closely matching the smoothness of the ground truth, while the other methods show significantly higher values on the Jitter metric. > Q4: Qualitative comparisons against state-of-the-art approaches and video visualizations of the results. We appreciate your comment on the importance of qualitative comparisons. While we have visualized our qualitative results in a video format, we found that it is not permitted to include links during the rebuttal phase. As an alternative, we have included detailed comparative qualitative results in the PDF file attached to this rebuttal. We hope this additional information provides clarity and supports the quantitative improvements we reported.
We also confirm that we will add video comparisons against state-of-the-art methods in the revised version. > Q5: Lack of publicly available code. We recognize the value of open-source code and plan to release ours upon acceptance. Our organizational policy prioritizes careful review and preparation before any public release. In the meantime, we're committed to providing detailed methodologies in our publications to support reproducibility and further research in the field. Pdf: /pdf/f9a6840674d59196fbc1f00c4f49d8b0cc32b06b.pdf
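To make the Jitter discussion in Q3 above concrete, here is a minimal, hedged sketch of how a jerk-based jitter metric can be computed from a per-frame joint trajectory. The function name, the finite-difference scheme, and the plain averaging are illustrative assumptions, not the exact evaluation protocol of the paper:

```python
def jitter(positions, fps=60.0):
    # Hypothetical helper: mean |jerk| of a 1-D joint trajectory.
    # Jerk is the third time derivative of position, approximated here
    # by three successive finite differences with step dt = 1 / fps.
    dt = 1.0 / fps

    def diff(seq):
        return [(b - a) / dt for a, b in zip(seq, seq[1:])]

    jerk = diff(diff(diff(positions)))  # position -> velocity -> accel -> jerk
    return sum(abs(j) for j in jerk) / len(jerk)
```

Constant-velocity motion has zero jerk, so it scores 0 under this sketch, while noisy or discontinuous motion scores high, which matches the rebuttal's use of Jitter as a smoothness measure.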
NeurIPS_2024_submissions_huggingface
2,024
Summary: The paper introduces an approach for estimating full-body pose from egocentric videos combined with sparse head and hand positions. The key contribution lies in utilizing sparse temporal annotations of hand positions to achieve a complete representation. The method is evaluated on the publicly available Ego-Exo4D and AMASS datasets, demonstrating performance that surpasses current state-of-the-art approaches. Strengths: - **Relevance of the task/scope of the paper:** - The task of estimating full-body pose from egocentric video and sparse positional signals is relevant for the NeurIPS community, and the paper presents this relevance adequately. - **Technical novelty of the approach:** - The approach utilizes both images and sparse information from the hands and head, which is a novel idea. The use of a masked autoencoder to complete the information from the hands is also innovative. - **Technical correctness of the paper:** - The methods section describes the proposed approach clearly. - **Related Work:** - The review of related work is comprehensive, covering all relevant literature. - **Experimental validation:** - The experiments are evaluated on the AMASS and Ego-Exo4D datasets, with a thorough comparison against current state-of-the-art approaches. The quantitative results show improvement over previous methods. - **Writing and presentation:** - The paper is mostly well-written, and the ideas are conveyed clearly despite a few typographical errors. Weaknesses: **Weaknesses:** - **Technical contributions:** - The key contributions could be consolidated into one. The first and second contributions are similar, and the third discusses the potential for AR experiences without providing experiments involving real-life AR devices. - **Experimental validation:** - The paper lacks qualitative comparisons against state-of-the-art approaches and video visualizations of the results. 
Given the low MPJPE and MPJRE values in current methods, visual assessments are important to ensure the improvements are not just quantitative artifacts. Technical Quality: 3 Clarity: 3 Questions for Authors: **Justification:** Overall, the paper has clear contributions and thorough quantitative comparisons against previous state-of-the-art approaches. However, the claims about the method's use in AR and VR applications could be substantiated better, and there is a lack of qualitative comparisons. **Additional comments:** - There are missing articles (e.g., "the") in multiple instances (L84, L124). - The phrase "using the transformer architecture" in L155 is redundant. - Correct "L161 strategies" and "L45 signals". Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper addresses the limitations. One of the main limitations is the lack of publicly available code, which will hinder progress in the community. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your acknowledgment of our paper's novelty and your valuable suggestions for improving our method. We fully recognize the importance of open-source code in advancing research, as this paper is also built upon the publicly available code of other researchers. > Q1: Key contributions can be consolidated into one & no experiments involving real-life AR devices. We appreciate the feedback regarding the similarity between our first and second contributions. Our primary aim was to highlight different aspects of our approach, but we understand the need for clarity and consolidation. Additionally, while the dataset we used in our experiments, EgoExo4D, is collected using Meta's Aria Devices (an HMD/AR device) and demonstrates the potential for AR applications, we recognize that explicit experiments involving real-life AR devices would strengthen our claims. We will consider this in future work. > Q2: Qualitative comparisons against state-of-the-art approaches and video visualizations of the results. We appreciate your comment on the importance of qualitative comparisons. While we have visualized our qualitative results in a video format, we found that it is not permitted to include links during the rebuttal phase. As an alternative, we have included detailed comparative qualitative results in the PDF file attached to this rebuttal. We hope this additional information provides clarity and supports the quantitative improvements we reported. We also confirm that we will add video comparisons against state-of-the-art methods in the final version. > Q3: Lack of publicly available code. We recognize the value of open-source code and plan to release ours upon acceptance. Our organizational policy prioritizes careful review and preparation before any public release. In the meantime, we're committed to providing detailed methodologies in our publications to support reproducibility and further research in the field. > Q4: Misc.
Thank you for pointing out the grammar mistakes and writing issues. Following the reviewer's comments, we will: 1. Conduct a thorough grammar review of the manuscript, including the ones you mentioned. 2. Remove the phrase "using the transformer architecture" in L155. We believe these changes will improve the clarity and readability of our paper. --- Rebuttal Comment 1.1: Comment: The authors' responses appropriately addressed my concerns, and after reading the other comments from the reviewers I would like to maintain my initial rating. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful review and for confirming that our responses addressed your concerns. We appreciate your decision to maintain your initial rating and value the feedback you have provided. Thank you again for your time and consideration.
Approximately Pareto-optimal Solutions for Bi-Objective k-Clustering
Accept (poster)
Summary: This work develops efficient algorithms to approximate the Pareto-optimal set for different bi-objective clustering problems with solid theoretical guarantees. The problem can have different clustering objectives (e.g., k-separation, k-center, k-diameter, k-median, k-means, k-MSR) and/or different metrics (e.g., different distance measures). Two different types of algorithms are considered in this work to (1) approximate the whole Pareto set for problems that combine k-separation with various k-clustering minimization objectives or combine k-center/k-diameter with a k-clustering minimization problem, and (2) approximate the convex Pareto set (e.g., the convex hull of the ground-truth Pareto set) for problems that combine k-median and k-means, where the Pareto set can be exponentially large. Thorough and comprehensive theoretical analyses are provided to support and demonstrate the efficiency of the proposed algorithms. Experimental results show that the proposed algorithms can achieve promising performance on different bi-objective clustering problems. Strengths: + This paper is well written and easy to follow. + The multi-objective clustering problem is valuable and important for many real-world applications. This work would be impactful and could inspire much follow-up work in this research direction. + The proposed algorithms and their corresponding theoretical analysis are comprehensive and cover a wide range of different problem settings in a systematic way, which is a solid contribution. + The experiments are well-designed to nicely show the benefits of multi-objective clustering. Weaknesses: I enjoyed reading this paper and do not see any obvious weakness in this work. Below are some minor concerns and questions. **1. Centers Chosen from the Point Set** One limitation of the proposed algorithms is that the clustering centers can only be chosen from the point set.
I can understand the necessity of this requirement when we hope to approximate all Pareto solutions (up to $O(n^2)$ with this requirement) in subsection 2.1 and subsection 2.2. However, it is not clear whether this restriction is still necessary when we want to approximate the convex Pareto set in section 2.3. What is the challenge for this case if we are allowed to choose the center from an ambient metric space? **2. Extension of the LP Rounding Algorithm** As mentioned in the related work section, Alamdari and Shmoys (2017) leverage the LP rounding algorithm by Charikar et al. to handle the bi-objective clustering problem with k-center and k-median, while this work uses the primal-dual algorithm by Jain and Vazirani (2001). I am curious whether the LP rounding algorithm can be extended to tackle (some of) the other settings considered in this work. In addition to the incomparable approximation factor, what are the pros and cons between the LP rounding algorithm and the primal-dual algorithm used in this work for multi-objective clustering? **3. Generalization to more than Two Objectives** This work focuses on bi-objective clustering, but many real-world applications might involve more than two objectives. What makes it hard to generalize the current methods to handle more objectives for approximating the Pareto set or convex Pareto set? **4. Minimum Cardinality Set** It seems that the proposed methods in this work do not consider minimizing the cardinality of the approximate solution set. Can these methods be further extended to find the smallest (minimum cardinality) approximate Pareto set as discussed in Diakonikolas (2011)? **5. Finding the Most Suitable Solution** The number of solutions found by the proposed methods for a given problem can still be large. How can we efficiently find the best solution for each application (e.g., the best clustering in Figure 4)?
Since the ground truth clustering is unknown, we cannot calculate the quality metric (e.g., Normalized Mutual Information and Rand Index) for each solution. Will the users have to check all the solutions themselves and then choose the most suitable one? In addition, how can the clustering number $k$ be properly chosen for real-world multi-objective clustering problems? **6. Order of the Problem Settings** There is a mismatch between the order of problem settings in the main paper and the appendix. In the main paper, the order of problems is 1) k-separation + k-clustering minimization, 2) k-center/k-diameter + k-clustering minimization, and then 3) combination of k-median and k-means. However, in the appendix, the order is A) combinations of k-clustering minimization, and then B) combinations involving k-separation. What is the reason for this choice? **7. Publication Venue** *(This part will not affect my decision.)* This work conducts a comprehensive study on multi-objective clustering, which cannot be compressed into a 9 (or 10 in camera-ready) page conference paper. Many important materials (e.g., algorithms, all theorems, and theoretical analyses), which are actually the crucial contributions of this work, have to be put in the supplemental materials. There is also no room left for a conclusion section. I think a journal like JMLR should be a more suitable choice to publish this work. Technical Quality: 4 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: This work has discussed its limitations in the introduction (page 2, objectives subsection), but I think an explicit limitation subsection could be much more helpful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
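As a hedged aside on the Pareto-set terminology used throughout this review, the basic dominance filter that defines a Pareto front for two minimization objectives can be sketched as follows. This is a generic illustration of the concept, not the paper's approximation algorithm:

```python
def pareto_front(solutions):
    # Keep the non-dominated (cost1, cost2) pairs: a solution is dropped
    # if some other solution is no worse in both objectives and strictly
    # better in at least one (both objectives are minimized here).
    return [s for s in solutions
            if not any(o[0] <= s[0] and o[1] <= s[1] and o != s
                       for o in solutions)]
```

The quadratic scan is fine for illustration; the review's point is precisely that for combinations like k-median + k-means the exact Pareto set itself can be exponentially large, which is why the paper targets the convex Pareto set instead.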
Rebuttal 1: Rebuttal: *1. Centers Chosen from the Point Set* We decided to restrict to this case in the theory part. In 2.1 we could switch to choosing centers from a larger space, e.g., the infinite metric space $R^d$, because this algorithm is still a 2-approximation even in this case. It would make even more sense to switch for k-means (in the practical algorithm in C.1 we actually compute centers from $R^d$). But, e.g., in 2.2, if we allowed $R^d$ for one objective we could not use standard $R^d$ techniques like epsilon-nets unless both objectives use $R^d$. On the other hand, all our results imply results for the infinite case with an additional factor of 2, so we opted against including more case distinctions. We agree that this is not sufficiently discussed in the introduction yet. *2. Extension of the LP Rounding Algorithm* We did not try to adapt the algorithm by Charikar et al. But in the single-objective case, it is actually the JV algorithm that is often adapted, while the rounding algorithm (which is actually older than the JV algorithm) involves many intricate steps and we are not aware of many adaptations. For example, the (3+eps)-k-MSR-approximation is based on the JV algorithm. We are not even sure if the Alamdari/Shmoys algorithm can be adapted to the case of different metrics, which they did not consider (it works for two objectives on the same metric). Which other setting would you be interested in knowing about? Pros/cons: The incomparable approximation factor may even be in favor of Alamdari/Shmoys. But we expect that our algorithm is better suited for practical purposes. We had positive experiences with implementations of other JV adaptations in the past. We did not write anything about this because we did not practically test it here. Finally, the algorithm by Charikar et al. requires actually solving an LP, while the primal-dual algorithm is combinatorial (only the proof needs the LP), which could also be a pro of our algorithm. *3.
Generalization to more than Two Objectives* For the sake of simplicity we focus on the combination of two objectives. We believe that the cases where we combine at most one k-median or k-means objective with other (possibly multiple) objectives such as k-center/k-diameter/k-separation can be handled with techniques similar to the ones presented in this work. The combination of two k-median/k-means objectives differs substantially from the other combinations since we are only able to compute an approximation to the convex Pareto set. As long as one only uses sum-based objectives (k-median or k-means), this approach should also extend to more than two objectives. One may adapt the primal-dual algorithm by Jain and Vazirani, where we change the objective to a convex combination of multiple k-median/k-means objectives, and combine it with (multiple) k-center objectives to compute an approximation to the convex Pareto set. From a practical point of view, the main downside of computing the Pareto set for multiple objectives lies in the large size of the Pareto set and therefore also in the high running time. This only seems to make sense when combined with automatic methods to identify an interesting subset of the approximate Pareto set that we want to compute. *4. Minimum Cardinality Set* This is an interesting question that we did not consider yet. Diakonikolas (2011) and Vassilvitskii, Yannakakis (2005) both show that the smallest approximate Pareto set can be approximated for large groups of problems, namely if you have a polynomial-time gap routine (VY2005) or if you can solve a restricted version of the problem in polynomial time (D2011). Unfortunately, those frameworks do not directly carry over to our setting, since we generally only have constant-factor approximations for the gap problem. It seems possible that these frameworks could be adapted to work in this context as well.
It would also be interesting to check whether our algorithms can be adapted directly to compute smaller approximate Pareto sets, for example for the combination of two k-center objectives: does one really have to consider all combinations, or can certain values that are already 2-approximated be cleverly skipped? *5. Finding the Most Suitable Solution* Indeed, we originally thought of this as an ensemble which is then checked manually. However, it clearly makes sense to combine the algorithms with known techniques to choose k (the elbow method, more complicated phase-transition detection), or to extend these methods to compute an "interesting" subset of the Pareto curve, or to apply other techniques to reduce the size of the approximate Pareto set (also related to question 4). In the case of applications C.1, C.2 and C.3 the number of solutions was manageable manually. In C.1 we observed a general trend that small separation values improve the solution while enforcing a large separation did not help. It seems that the reason is that k-means++ sometimes splits large clusters in the middle, and this is not possible when close points are forced to be in the same cluster. So for practical applications, as a rule of thumb, one could compute the first few clusterings (which have small separation and small k-means cost) on the Pareto curve and select one of them manually. Only computing this part of the curve of course also reduces the running time. *6. Order of the Problem Settings* This is just an oversight and has no hidden meaning. Thank you for pointing it out! *7. Publication Venue* Thank you very much for pointing out the comprehensive nature of our paper. It is true that it does not really fit into the page limit and we struggled to present all material to the extent that we would have liked. We chose to send the paper to NeurIPS since we introduce a new angle to the study of approximation algorithms for clustering.
We believe that it would be an interesting contribution to NeurIPS since there are many possible follow-up questions to obtain faster and better approximations of the Pareto curves. --- Rebuttal Comment 1.1: Comment: Thank you for the thorough response. All my concerns have been properly addressed, so I keep a positive score (7) at this stage.
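The convex-combination idea mentioned in the rebuttal's answer on generalizing to more objectives can be illustrated by a weighted-sum scalarization sweep. This is a hedged sketch with a hypothetical `solve_scalarized` oracle; it is not the authors' primal-dual algorithm, and by construction it only recovers points of the convex Pareto set:

```python
def convex_pareto_sweep(solve_scalarized, lambdas):
    # Trace an approximation of the convex Pareto set by minimizing the
    # scalarized objective lam * f1 + (1 - lam) * f2 over a sweep of
    # weights lam in [0, 1]. solve_scalarized(lam) is assumed to return
    # the (f1, f2) values of an (approximately) optimal solution of the
    # scalarized problem.
    points = {solve_scalarized(lam) for lam in lambdas}
    # Discard dominated points (both objectives minimized).
    return {p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)}
```

A finer lambda grid yields more supporting points of the convex hull of the Pareto front; solutions in its interior, which a full Pareto-set algorithm would report, are never produced by any weight.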
Summary: This paper presents a novel framework for Pareto optimization in clustering. This paper studies approximately Pareto-optimal solutions for bi-objective k-clustering problems. The authors focus on the computationally very efficient single-linkage / must-link constraints and the computation of Pareto-optimal solutions. They first establish that it is not possible to simultaneously approximate k-separation and any of the minimization problems to constant factors. By iterating through all pairs of possible separation and possible radius/diameter, they obtain the approximate Pareto set. The authors also give results for combining k-center or k-diameter with a k-clustering minimization problem, and results for combinations of k-median and k-means. Strengths: 1. The authors give a novel approximately Pareto-optimal approach for bi-objective k-clustering problems. The authors give results for combining k-separation with various objectives with a $(1,\alpha)$-approximate Pareto set with respect to sep with metric $d_1$ and $f_2$ with metric $d_2$. 2. The paper produces an informative ensemble. These algorithms achieve provable approximation guarantees. The theoretical evidence is solid and the experiments are well-established. 3. The authors verify that the approximate Pareto front contains good clusterings which cannot be found by considering a single objective. Weaknesses: 1. This paper is not very well motivated. The authors do not seem to explain the motivation for studying Pareto-optimal algorithms for the clustering problem. Moreover, the authors did not give the motivation for why the combination of these clustering objectives is studied. 2. The authors do not seem to conduct experiments on large-scale datasets, such as 100 million points. Moreover, there are also faster versions of the k-means++ method, such as the rejection sampling method proposed in [1] and the random-projection-based k-means++ method proposed in [2].
The authors did not include comparative experiments with these algorithms. 3. The algorithms have polynomial running time. As far as we know, the Pareto optimization of other problems can be done in linear time. 4. Some sections, particularly those involving heavy mathematical notation and proofs, could be made clearer with additional explanations or visual aids. This would make the paper clearer. [1] Cohen-Addad V, Lattanzi S, Norouzi-Fard A, et al. Fast and accurate k-means++ via rejection sampling[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems. 2020: 16235-16245. [2] Charikar M, Henzinger M, Hu L, et al. Simple, scalable and effective clustering via one-dimensional projections[C]//Proceedings of the 37th International Conference on Neural Information Processing Systems. 2023: 64618-64649. Technical Quality: 2 Clarity: 2 Questions for Authors: Q1: Why do the authors give this definition of Pareto-optimal solutions for bi-objective clustering? The authors should explain the motivation for proposing this definition. Q2: The authors focus on the single-linkage / must-link constraints. The authors should explain the significance of these constraints and give a comparison with a single objective. Q3: Can the authors provide more detailed real-world applications where these bi-objective clustering algorithms would be particularly beneficial? Q4: Do these experiments scale to large datasets? The authors should give some experiments on large-scale datasets. Moreover, the authors should add comparative experiments with other basic clustering algorithms. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: This paper is mainly a theoretical result, and there is no negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *W1: This paper is not very well motivated. The authors do not seem to explain the motivation for studying Pareto-optimal algorithms for the clustering problem. Moreover, the authors did not give the motivation for why the combination of these clustering objectives is studied.* We chose these objectives since they are the best-known classical partitional clustering objectives and it seems useful to know about their joint optimization. Indeed, we originally were motivated by hoping for algorithms that could simultaneously approximate several objectives, until we proved that this was not possible. From a practical point of view, our original motivation stemmed from applications C.2 and C.3. In collaborations we observed the need to find clusterings while optimizing over two very different metrics, e.g., one based on Euclidean distances and one based on time-series similarity. We believe that such trade-offs are quite natural in practical applications. *W2: The authors do not seem to conduct experiments on large-scale datasets, such as 100 million points. Moreover, there are also faster versions of the k-means++ method, such as the rejection sampling method proposed in [1] and the random-projection-based k-means++ method proposed in [2]. The authors did not include comparative experiments with these algorithms.* We did indeed not implement a super-fast k-means++ variant. There are several options to do so, but they require additional work to satisfy theoretical guarantees. Notice that it is not sufficient to simply pick a fast k-means algorithm for which it is not clear how to bound the second objective. In C.1 we take a middle ground by using k-means++ instead of the primal-dual algorithm used in A.3 (this change decreases the approximation ratio but the algorithm still satisfies weaker guarantees). The k-means++ algorithm is state-of-the-art for medium-sized data.
We have speculated about the case of large data quite a bit and believe that super-fast combinations are also possible, but the aim of this paper was an initial (and already pretty page-heavy) collection of first results on the topic. *W3: The algorithms have polynomial running time. As far as we know, the Pareto optimization of other problems can be done in linear time.* For clustering problems such as k-center and k-means, there are no algorithms which obtain constant-factor approximations and have linear running time, even in the case where we optimize over a single objective. Therefore we conjecture that it is not possible to compute a Pareto set or an approximate Pareto set in linear time, since this requires solving multiple such problems. Our experiments can be seen as a proof of concept that the approaches can be practical and work on medium-size data sets. *W4: Some sections, particularly those involving heavy mathematical notation and proofs, could be made clearer with additional explanations or visual aids. This would make the paper clearer.* Answer: We apologize for the more technical nature of the appendix. We agree that the exposition could be better and plan to improve it for a long version of the paper. We hope that this comment did not apply to the main part of the paper. We welcome any additional pointers to sections that were unclear in order to improve them. *Q1: Why do the authors give this definition of Pareto-optimal solutions for bi-objective clustering? The authors should explain the motivation for proposing this definition.* We apply the standard definitions of Pareto sets to the setting where the aim is to optimize two clustering objectives. The fact that we also include the possibility to optimize over two different metrics is due to the application in C.3 where this is necessary. *Q2: The authors focus on the single-linkage / must-link constraints.
The authors should explain the significance of these constraints and give a comparison with a single objective.* Single linkage may be over-highlighted in the introduction. We actually consider two angles: i) combining single linkage with other objectives and ii) combining various well-known partitional clustering objectives with each other. For i), our motivation was of a more conceptual nature: to see if we can improve clustering for finding ground truths by including separation constraints into the consideration. ii) is directly motivated by applications where two different metrics are present. *Q3: Can the authors provide more detailed real-world applications where these bi-objective clustering algorithms would be particularly beneficial?* We point out that the data in C.2 and C.3 are from real-world applications. In particular, C.3 is a question that is actually studied in geodesy and where two metrics are present over which one needs to optimize. Could you clarify in which aspect more details would be required? It may be that we described the applications a bit too briefly in the main body of the paper, and we apologize for that. *Q4: Do these experiments scale to large datasets? The authors should give some experiments on large-scale datasets. Moreover, the authors should add comparative experiments with other basic clustering algorithms.* In C.1 we focus on the question whether the combination of two clustering objectives can improve the quality of a solution. Thus our experiments provide a comparison between the quality of a clustering obtained by k-means++, a single-linkage clustering, and the best clustering on the Pareto curve. In C.2 and C.3 our aim was to highlight the usefulness of bi-objective clustering. We did not optimize the implementation for speed. Improving our implementation speed-wise or extending our guarantees to faster algorithms is an interesting open question.
We also think that the computation of the full Pareto set can in most cases be avoided by identifying a smaller set of interesting solutions automatically, but we did not yet test this approach. --- Rebuttal Comment 1.1: Comment: Thank you for your response. However, after carefully considering your points, I prefer to maintain the original score. --- Reply to Comment 1.1.1: Comment: Thank you for your comment. Could you maybe give us a little more detailed response on why you prefer to maintain your negative score? We would be very interested to know what the remaining criticism is.
Summary: The paper introduces novel algorithms to approximate the Pareto-optimal solutions for bi-objective k-clustering problems. The authors focus on combinations of clustering objectives such as k-center and k-means, or k-center with two different metrics. Usually, these objectives are conflicting and cannot be optimized simultaneously, making it necessary to find trade-offs. The algorithms provide provable approximation guarantees and are demonstrated through experiments to yield good clusterings that single-objective approaches fail to capture. Strengths: 1. The paper addresses the complex issue of multi-objective clustering. The motivation with the k-diameter and k-separation problems accurately captures the difficulty of multi-objective optimization settings. The work provides a novel solution to approximate the Pareto front for bi-objective k-clustering problems. 2. The authors validate their approach through extensive experiments, showing that the approximate Pareto front includes superior clusterings compared to those obtained by single-objective methods. 3. The paper is written very well, with detailed approaches. Weaknesses: 1. The main novelty of the work is not entirely clear. In the setting of sec 2.1, where k-sep is combined with other objectives, the main takeaway seems to be that the authors were able to integrate existing state-of-the-art guarantees into their framework. Similarly, in the other sections where Pareto-optimal solutions are discussed, the authors rely heavily on already existing approaches. 2. Most results are either incomparable with respect to the related work, because of bicriteria guarantees, or translate to the existing guarantees. The results section should succinctly describe the main technical contributions (even if a few) for a better understanding of the technical innovations.
Technical Quality: 3 Clarity: 3 Questions for Authors: I would request the authors to clarify the main technical innovations in this work -- e.g., in 229: "The input to the algorithm now consists of clusters instead of single points". Does it require a substantial change in techniques? Pareto-optimality makes sense for multi-objective guarantees, but falling back on bi-criteria guarantees also might defeat the purpose. Is there a way to reconcile both and argue about them? The approximate Pareto-optimal-set-based approaches rely heavily on existing works. Can you please highlight the challenges or technical innovations in adapting them to the set of problems considered? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *Weakness 1: The main novelty of the work is not entirely clear. In the setting of Sec. 2.1, where k-sep is combined with other objectives, the main takeaway seems to be that the authors were able to integrate existing state-of-the-art guarantees into their framework. Similarly, in the other sections where Pareto-optimal solutions are discussed, the authors rely heavily on already existing approaches.* The main novelty is that algorithms with theoretical guarantees for the computation of Pareto sets for clustering have not been studied before (the only known related result was developed in a different framing). We give many results matching the best single-objective approximation ratios, and (nearly) none of this was known beforehand. Given that better bi-objective guarantees would also correspond to better single-objective approximations, and most of these clustering problems are well studied, we find it unlikely that we could get any further improvements in these cases. We are aware that at some points our bi-objective algorithms are pretty straightforward adaptations of existing algorithms, but we felt that it still makes sense to include these results for completeness. Rather than calling one single algorithm the main result, we consider our contribution to be the sum of all techniques presented in this paper to deal with the different combinations of objective functions, together with the given worst-case instances and the experiments, addressing a natural research question that has only scarcely been studied before. Indeed, Sec. 2.1 has less to offer in terms of technical contribution in comparison to the other sections. We put it in front because we like the conceptual contribution, as it offers a new angle on the very basic question of how to obtain a meaningful clustering that reconciles two natural ideas (separation and compactness of clusters).
*Weakness 2 / Question 1+3 (Technical contribution): I would request the authors to clarify the main technical innovations in this work -- e.g., in 229: "The input to the algorithm now consists of clusters instead of single points". Does it require a substantial change in techniques?*, also *The approximate Pareto-optimal-set-based approaches rely heavily on existing works. Can you please highlight the challenges or technical innovations in adapting them to the set of problems considered.* The referenced line 229 actually did not require much change in techniques. Here is a list of novel contributions of a technical nature: - the runtime reduction technique in A.1. The main idea behind the improvement was to not calculate an entirely new graph and independent set in every iteration of Hochbaum and Shmoys, but rather to modify the already existing data structures. Via the careful use of a potential function we were able to prove the reduced runtime of $O(n^3)$ (pages 14-16) - the lower bound for the size of the Pareto set for k-center/k-center in A.2 (page 17) - using nesting for the k-sep/k-median case (Lemma 41/45) - the lower bound for the size of the Pareto set for k-median and k-means (Theorem 21) - lower bound examples showing that SL cannot be reasonably approximated together with the other objectives (Ex. 1, Ex. 31, Obs. 36, Obs. 46) - the algorithm in C.1 is also novel (but relatively straightforward). Aside from this, we believe that collecting many techniques and finding those that can be applied in this setting is a contribution in its own right, like using nesting from hierarchical clustering (Lemma 41/45) or observing that Diakonikolas' framework works well with approximation algorithms. Finally, Sec. C.2 and C.3 offer novel modeling for two real-world problems to highlight the usage of bi-objective optimization algorithms. *Q2 - Pareto-optimality makes sense for multi-objective guarantees. But falling back on bi-criteria guarantees also might defeat the purpose.
Is there a way to reconcile both and argue about them?* As we have shown in Ex. 1, Ex. 31, Obs. 36 and Obs. 46, at least for the combination of the k-separation objective with any other clustering objective we cannot hope for any bi-criteria approximation guarantee, even if both objectives are defined on the same metric. Similarly, it can be shown that it is not possible to optimize k-means or k-median and k-center or k-diameter at the same time, since the k-center/diameter objective could force the algorithm to pick outliers, which might decrease the quality of the means/median solution by an arbitrary amount (e.g., combining k-center/diameter with k-means/median, one would construct two point sets of high cardinality close to each other and one single point far away. For $k=2$, k-means/median has to place one center in each large point set, while k-center would spend one center on the outlier.) Thus, bi-criteria guarantees are in some sense the best we can hope for in this setting (at least for polynomial-time algorithms). We are not sure if we understood the question correctly. If it is about the fact that we have two objectives rather than many, then we remark that we are definitely interested in the extension to multiple objectives, and in many cases it should be possible. But the paper is already very long, so we leave additional technical work on multiple objectives to future work. --- Rebuttal 2: Title: Please respond to the rebuttal Comment: Dear reviewer CrwW, Can you please read and comment on the rebuttal from the authors? Also, since we have a somewhat mixed rating, please take other reviews and responses into consideration. Regards, Area Chair --- Rebuttal Comment 2.1: Title: Rebuttal Comment: Thanks for answering my questions -- they clarified some of the questions I had with regard to novelty.
However, I do want to point out to the authors that the presentation of the paper requires significant reworking to bring out the main technical contributions, for others in the research community to appreciate (or even understand) the technical contributions/results. After reading the rebuttal responses, I'm increasing my score.
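The outlier construction from the rebuttal above (two large point sets close together plus one distant point, $k=2$) can be checked numerically. A minimal 1-D sketch with illustrative numbers of our own choosing, not taken from the paper:

```python
import numpy as np

# Two large point sets at 0 and 10, plus a single distant outlier at 100 (k = 2).
A = np.zeros(1000)
B = np.full(1000, 10.0)
outlier = np.array([100.0])

def kmeans_cost(clusters):
    # Sum of squared distances to cluster means.
    return sum(((c - c.mean()) ** 2).sum() for c in clusters)

def kcenter_cost(clusters):
    # Largest cluster radius (in 1-D: half the cluster's extent).
    return max((c.max() - c.min()) / 2 for c in clusters)

isolate_outlier = [np.concatenate([A, B]), outlier]   # k-center's preferred split
split_big_sets = [A, np.concatenate([B, outlier])]    # k-means' preferred split

# The two objectives disagree on which clustering is better:
assert kcenter_cost(isolate_outlier) < kcenter_cost(split_big_sets)  # 5.0 < 45.0
assert kmeans_cost(split_big_sets) < kmeans_cost(isolate_outlier)    # ~8091.9 < 50000.0
```

With enough points in each big set, absorbing the outlier into one of them (as k-means prefers) stays cheap, while merging the two big sets costs on the order of $n d^2$; k-center instead pays for the outlier's distance, so the two objectives pull toward different partitions.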
null
null
Rebuttal 1: Rebuttal: We thank all reviewers for considering our work and providing detailed reviews. Replies to remarks and questions can be found in the individual rebuttals for each review.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Robust Offline Active Learning on Graphs
Accept (poster)
Summary: This paper proposes an offline active learning method that selects nodes to query by explicitly incorporating information from both the network structure and node covariates. This paper establishes a theoretical relationship between generalization error and the number of nodes selected by the proposed method. Strengths: 1. The theoretical analysis is sufficient. 2. Offline graph active learning is important. 3. The proposed method is easy to implement. Weaknesses: I appreciate the authors' theoretical analysis, but the experimental section is clearly insufficient. Recent work on graph active learning [1] has been conducted on larger-scale datasets like Arxiv and Products. Conducting experiments solely on datasets such as Cora is not adequate. [1] Partition-based active learning for graph neural networks. TMLR 2023. Technical Quality: 2 Clarity: 2 Questions for Authors: Please provide detailed experimental results for additional datasets. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for acknowledging the theoretical analysis of our algorithm and for your insightful suggestions! We have carefully considered your concerns regarding the insufficient experimental results and have worked diligently to address them. >**Weakness: Recent work on graph active learning [1] has been conducted on larger-scale datasets like Arxiv and Products. Conducting experiments solely on datasets such as Cora is not adequate.** To demonstrate the scalability of our algorithms on larger-scale datasets, we conducted experiments on the two largest datasets (Co-Physics with n=34,493 and Ogbn-Arxiv with n=169,343) included in your suggested paper [1]. The results of our algorithm and the baseline methods are summarized in **Table 2** of the global response PDF. The greatest improvement is observed in the Macro-F1 score on Arxiv, with a margin as large as 4.8% at 320 labeled nodes. We argue that this is particularly significant given that Arxiv (41 classes) is a class-imbalanced dataset, where Macro-F1 is a more appropriate metric for evaluation. Inspired by your comment, we carefully examined the computational cost of the proposed method. The complexity of our method is $\mathcal{O}(n+m+nm+n^3)$ for a single node query. When the node feature dimension $p \ll n$, we can speed up the informative selection by replacing the SVD with the Lanczos algorithm to obtain the $p$ largest or smallest eigenvalues and the corresponding eigenvectors. The time complexity of the Lanczos algorithm is $\mathcal{O}(pn^2)$ [2]. Then the complexity of the proposed biased sampling method is $\mathcal{O}(pn^2)$ for a single node query. This complexity is comparable to GNN-based network active learning methods since a GNN in general has complexity $\mathcal{O}(pn^2)$ per training update [3].
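As a rough illustration of the Lanczos-based speedup mentioned above (our sketch, not the authors' implementation): SciPy's `eigsh` wraps ARPACK's implicitly restarted Lanczos method and returns only a few extreme eigenpairs of a sparse symmetric matrix. Here we use the Laplacian of a path graph, whose spectrum is known in closed form:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

n, p = 50, 5

# Laplacian of the path graph on n nodes (sparse, tridiagonal)
deg = np.full(n, 2.0)
deg[0] = deg[-1] = 1.0
off = -np.ones(n - 1)
L = diags([off, deg, off], offsets=[-1, 0, 1], format="csr")

# p smallest-eigenvalue pairs via Lanczos, without a dense n-by-n decomposition
vals, vecs = eigsh(L, k=p, which="SA")

# The path-graph Laplacian spectrum is 4 sin^2(k*pi/(2n)), k = 0, ..., n-1
expected = 4 * np.sin(np.arange(p) * np.pi / (2 * n)) ** 2
assert np.allclose(np.sort(vals), expected, atol=1e-6)
```

Replacing `which="SA"` with `which="LA"` would instead return the largest eigenpairs, matching the rebuttal's remark that either end of the spectrum can be targeted.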
More interestingly, we found that there is no need to store and perform SVD on the $n$-by-$n$ $P_{\mathcal{S}^c} L^k P_{\mathcal{S}^c}$ to obtain its eigenvectors and eigenvalues for node selection. Notice that the rank-$p$ projection matrix satisfies $P_{\mathcal{S}^c} = Z_{\mathcal{S}^c} Z_{\mathcal{S}^c}^T$, where $Z_{\mathcal{S}^c} \in \mathbb{R}^{n\times p}$ is a basis of $P_{\mathcal{S}^c}$. Then we can first perform an SVD on the $p$-by-$p$ matrix $Z_{\mathcal{S}^c}^T L^k Z_{\mathcal{S}^c} = U^T\Sigma U$, and the desired eigenvalues and eigenvectors are $\Sigma$ and $Z_{\mathcal{S}^c}U^T$, respectively. During the process, we only need to store and decompose a $p$-by-$p$ and an $n$-by-$p$ matrix, which can be handled efficiently via GPU-based matrix multiplication even when $n$ is large. We report the computational time of the proposed method for one node query on multiple benchmark network datasets in **Table 3** of the PDF. The time cost of a single query is about 2 seconds when $n$ is about 170,000. To the best of our knowledge, we did not find any offline graph-based active learning method in the current literature that has been tested on the Products dataset (n=1,569,960). We admit that it is difficult to re-run our method and all benchmark methods on this dataset within the limited rebuttal period. However, we appreciate the reviewer for pointing out this interesting dataset, and we will include these results in the final version of our paper. >**Question: Please provide detailed experimental results for additional datasets.** In addition to larger-scale datasets, we also included additional datasets that cover a wide range of homophily and heterophily levels. Besides the benchmark homophily networks (Cora, Citeseer, and Pubmed), we conducted experiments on two heterophily networks (Texas and Chameleon). We also ran all the competitive baselines on these two datasets, as they were not included in the original papers of any of the baselines.
The results are summarized in **Table 1** of the global response PDF. Our algorithm achieves the best performance in Cora and Texas and is comparable to the best baselines in Citeseer, Pubmed, and Chameleon. Moreover, we conducted simulation studies on synthetic networks with three different topologies: small-world property, community structure, and scale-free property. The results are summarized in **Figure 4** of the global response PDF. The proposed algorithm achieved the best performance in all three scenarios under different noise levels. **Summary:** Following your questions, we conducted experiments on additional networks of much greater size, different levels of homophily, and various topologies. The proposed algorithm achieved competitive, if not the best, performance in every category, indicating its scalability, generalizability, and robustness. >**References** [1] Partition-based active learning for graph neural networks. TMLR 2023.\ [2] Golub, Gene H., and Charles F. Van Loan. Matrix computations. JHU press, 2013.\ [3] Wu, Zonghan, et al. "A comprehensive survey on graph neural networks." IEEE transactions on neural networks and learning systems 32.1 (2020): 4-24. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I raised my score to 5. --- Reply to Comment 1.1.1: Title: Thank you for raising the score! Comment: We appreciate your recognition of the updated results and the improved score. If there are any other comments or questions, we would be pleased to discuss and clarify further!
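The small-matrix eigen-decomposition shortcut described in this rebuttal (decomposing the $p$-by-$p$ matrix $Z_{\mathcal{S}^c}^T L^k Z_{\mathcal{S}^c}$ instead of the $n$-by-$n$ matrix $P_{\mathcal{S}^c} L^k P_{\mathcal{S}^c}$) can be verified numerically. A minimal NumPy sketch; the sizes and the stand-in for $L^k$ are illustrative, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10

M = rng.standard_normal((n, n))
Lk = M @ M.T                                       # stand-in for the symmetric PSD L^k
Z, _ = np.linalg.qr(rng.standard_normal((n, p)))   # orthonormal basis; P = Z Z^T

# Shortcut: decompose only the p-by-p matrix
small_vals, U = np.linalg.eigh(Z.T @ Lk @ Z)
eigvecs = Z @ U                                    # eigenvectors of P L^k P (nonzero part)

# Check against the direct n-by-n route: the spectrum of P L^k P
# is small_vals together with n - p zeros
P = Z @ Z.T
full_vals = np.linalg.eigvalsh(P @ Lk @ P)
assert np.allclose(np.sort(full_vals)[-p:], np.sort(small_vals), atol=1e-6)
assert np.allclose((P @ Lk @ P) @ eigvecs, eigvecs * small_vals, atol=1e-6)
```

The identity behind the check: $(P L^k P)(ZU) = Z (Z^T L^k Z) U = Z U \,\mathrm{diag}(\lambda)$, so only the small decomposition and one $n \times p$ product are ever needed.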
Summary: The paper proposes a strategy for collecting labeled data for a semi-supervised learning algorithm focused specifically on learning on graphs. The paper provides a theoretical analysis of the proposed method, capturing both the quality of the samples that are selected for labeling as well as the prediction error of the overall learning procedure. Experimental results show the applicability of the method to real-world datasets. Strengths: - The method proposed in the paper, as well as the problem setting, are explained clearly. - The paper provides theoretical guarantees for the method that indicate its superiority compared to random sampling (Theorem 2) and characterize the error rate that can be achieved with this active SSL strategy (Theorem 3). Weaknesses: - The empirical analysis does not very convincingly suggest that the proposed method is better than the baselines considered. Moreover, it would be helpful if the figures showed confidence intervals or error bars. - The experimental results only compare with a few heuristics for data collection. It would be informative to consider other AL works proposed in the graph learning literature as baselines. - It is not very clear how the paper is positioned in the literature. It would help to have a related work section that can indicate prior works on AL and active SSL on graphs. - The clarity of Section 3 could potentially be improved, perhaps by reducing the amount of symbols to the ones that are strictly necessary and providing more clearly marked (e.g. with paragraph titles) intuitive descriptions of the steps that need to be taken and the obstacles that need to be overcome. Minor remarks: - lines 19-21: 3 different learning paradigms are mentioned in the first two sentences (active, semi-supervised and transductive learning). It would help if it was clearer early on how they are relevant for the problem that motivates this work.
- line 89: undefined symbol $\mathcal{B}$ Technical Quality: 3 Clarity: 3 Questions for Authors: - What is the computational efficiency of the proposed method? - How tight is the upper bound in Theorem 3? Would it be possible to compare it to numbers from simulations on some simple synthetic settings? - How does the method compare to online AL methods for graphs (e.g. [36, 37, 30] etc) or other offline AL methods for graphs? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: While the paper discusses some of the limitations of the proposed approach, it would be good to have a more detailed section on the computation cost of running the method as well as show the impact of various hyperparameters of the method on performance (e.g. m, various properties of the network and the generating process for (X, Y) etc). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback, which greatly improved our paper! We address the reviewer's comments point by point below. >**Weakness 1: empirical analysis does not very convincingly** Thank you for the comment! In the global response PDF, we included additional experiments on networks with various topologies (**Figure 4**), varying levels of homophily (**Table 1**), and much larger scales (**Table 2**). The proposed algorithm achieved the best prediction performance on three synthetic networks with different topologies, the large-scale network Arxiv, the homophily network Cora, and the heterophily network Texas. For the other four datasets, it is fair to argue that our performance was also competitive. Given these promising results, we are excited to refine the current algorithm in future work, such as extending it to the online setting, to further enhance its empirical performance on different networks. In the updated numerical results, we have included **error bars** indicating standard deviation, following your suggestion. >**Weakness 2: compare with AL baselines** We have included several more SoTA offline (RIM, GPT, SPA, FeatProp) and online (AGE, IGP) graph active learning methods in the additional experiments on both synthetic and real-world networks. Please refer to **Experiments** in the global response for details. > **Weakness 3: literature review on related works on graph-based active learning** Many graph-based active learning strategies have been proposed based on the principle of maximizing query gain across various information criteria that are defined on the graph domain. The effectiveness of maximizing graph-domain-based information measurements is generally not guaranteed and is challenging to analyze, due to the difficulty of quantifying the complexity of graph signals on the graph domain.
While a complexity measure of binary functions has been proposed for the graph domain [1], its extension to general graph signals with node-wise features remains unclear. Without an analyzable complexity measurement of the labeling function, information maximization may not align with the fastest direction of searching the labeling function space. Moreover, most of the existing query information measurements rely on real-time label feedback and are not applicable in offline batch settings. We propose a new active learning method in the spectral domain based on graph spectral methods. While spectral methods are utilized in graph sampling and signal reconstruction tasks [2,3], we utilize them to introduce a well-defined complexity measurement of the labeling function and the associated query strategy in the spectral domain. Our method is also related to active learning for regression, where learning performance is guaranteed via sample complexity analysis [4,5]. The most notable solution is to use importance sampling based on statistical leverage scores [6], which has sub-optimal sample complexity. The sample complexity of active regression problems is studied under the $l_p$ norm loss function [7]. Recently, it has been shown that the optimal sample complexity can be linear in terms of the number of regression parameters [8]. Existing methods along this line focus on linear regression [7, 8] or polynomial regression [9], whereas our method extends the theoretical guarantee of active regression learning to the graph semi-supervised learning task. > **Weakness 4: better presentation for section 3** We appreciate the reviewer's feedback. We have revised the notation system in this paper to simplify and better present the proposed method. In addition, we re-organized the materials and added intuitive discussion to enhance the logic flow and readability. >**Weakness minor**: We thank the reviewer for the thoughtful comments.
We have re-organized the materials in the introduction and clearly defined $\mathcal{B}$ as the query budget before line 89. > **Q1: computational efficiency of the proposed method** Please see the global response on **computational cost**. > **Q2: tightness of the upper bound in Theorem 3 and empirical validation** With $d$ and $m$ fixed, Theorem 3 implies the MSE decays at a rate of $\frac{1}{\sqrt{\mathcal{B}}}$, where $\mathcal{B}$ is the query budget. We demonstrate the relationship between MSE and $\mathcal{B}$ on simulated data in **Figure 1** of the global response PDF. The simulation demonstrates that the order of sample complexity in Theorem 3 matches the empirical results, implying the tightness of the generalization bound in Theorem 3. Please also see **Optimality of sample complexity** in the global response. > **Q3: Compare to online and offline AL methods** Please see Weakness 2 and **Experiments** in the global response. >**References** [1] Dasarathy et al. (2015) S2: An efficient graph based active learning algorithm with application to nonparametric classification. COLT [2] Gadde et al. (2014). Active semi-supervised learning using sampling theory for graph signals. ACM SIGKDD [3] Shuman et al (2013). The emerging field of signal processing on graphs. IEEE [4] Kiefer & Wolfowitz (1959). Optimum designs in regression problems. The annals of mathematical statistics [5] Chaudhuri et al. (2015) Convergence rates of active learning for maximum likelihood estimation. NeurIPS [6] Mahoney et al (2011). Randomized algorithms for matrices and data. Foundations & Trends in Machine Learning [7] Musco et al (2022). Active linear regression for p norms and beyond. IEEE [8] Chen & Price (2019). Active regression via linear-sample sparsification. COLT [9] Meyer et al. (2023) Near-linear sample complexity for lp polynomial regression.
ACM --- Rebuttal Comment 1.1: Title: Follow-up on our response to your feedback Comment: We are very grateful for the time and effort you have devoted to reviewing our work, and we deeply appreciate your insightful, valuable, and encouraging comments. In response to your questions, we conducted additional experiments to compare the proposed method with SoTA offline graph-based active learning methods (RIM, GPT, SPA, FeatProp), and online methods (AGE, IGP). The numerical comparisons are conducted on networks with various topologies and response noise (Figure 4), different levels of homophily (Table 1), and larger scales (Table 2). In these experiments, the performance of the proposed method is either the best or close to the best. In addition, we followed your suggestion to add error bars when presenting the results. We discuss the optimality of the generalization bound in Theorem 3 and illustrate the tightness via simulation in Figure 1. We also provide a detailed discussion of the computational complexity of the proposed method, which is $\mathcal{O}(pn^2)$, where $n$ and $p$ are the number of nodes and node features, respectively. We sincerely hope our response adequately addresses your concerns, and we will definitely incorporate these changes in the revised version. We look forward to your feedback with great anticipation.
Summary: The paper addresses the challenge of active learning on graphs where labeling node responses is costly. The authors propose an offline active learning method that selects nodes by incorporating both network structure and node covariates. The method leverages graph signal recovery theories and random spectral sparsification, employing a two-stage biased sampling strategy to balance informativeness and representativeness. Strengths: - The paper introduces a novel offline active learning approach that integrates network structure and node covariates. - The proposed method is validated through extensive experiments on both synthetic and real-world datasets, showcasing its robustness and effectiveness. Weaknesses: - How does the proposed method perform on networks with varying levels of homophily and heterophily? Can it adapt to different types of network structures? - Can the method be extended to online active learning scenarios, where nodes are queried in a sequential manner rather than in batches? - How does the performance of the proposed method compare with state-of-the-art graph neural network-based active learning methods [1] in a broader range of network topologies and noise conditions? - Is there a significant computational overhead associated with the two-stage biased sampling strategy, and how does it impact the scalability of the method for large-scale networks? [1] Focus on Informative Graphs! Semi-supervised Active Learning for Graph-level Classification. Pattern Recognition 2024 Technical Quality: 3 Clarity: 3 Questions for Authors: See above. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for acknowledging our algorithm's novelty and providing insightful feedback! We address your concerns point by point below. ## Weaknesses: >**(A) Performance on networks with varying levels of homophily and heterophily** Thanks for raising an excellent point about the generalizability of our method to networks with different levels of homophily. Our method is flexible in adjusting for heterophily in node selection. For example, we can construct the space $\bf{H}(\bf{X},\bf{A})$ based on eigenvectors of $\mathcal{L}$ corresponding to large eigenvalues. To handle the coexistence of homophily and heterophily, we can adjust the space by combining eigenvectors corresponding to either small or large eigenvalues. Current methods have primarily been tested on networks with homophily (e.g., Cora, Citeseer). To address your question, we conducted additional experiments and tested competitive baselines on two networks with strong heterophily (Texas and Chameleon). The results, summarized in **Table 2** of the PDF, show that our algorithm achieves the best performance on Cora (strongest homophily) and Texas (strongest heterophily), and is also competitive on the other datasets. >**(B) Extension to online active learning scenarios** The proposed method can be directly extended to a sequential query manner and an online setting, as long as the entire network among the nodes to be queried is accessible before the query process starts; this guarantees that the function space $\bf{H}(\bf{X},\bf{A})$ is fixed, so label information can be accumulated for signal recovery. The biased sampling procedure can select one node and query its label. With the updated set of labelled nodes, we can target a subspace of $\bf{H}(\bf{X},\bf{A})$ that better fits the current labels. In the next query iteration, we select the unlabelled node maximizing the information gain on the identified subspace.
Intuitively, we gradually shrink the search space during the sequential query, with the goal of finding a small but informative function space for estimating the graph signal, which helps reduce the generalization error. In summary, the proposed method can be extended to the sequential learning scenario, and we reasonably conjecture that its performance can be further improved in that scenario. >**(C) Performance with SoTA methods [1] in a broader range of network topologies and noise conditions** Thank you for the suggestion! We would like to clarify that [1] is designed for graph-level classification tasks, which differs from our method, which targets node-level classification tasks. However, [1] is an interesting read, and we will discuss it in our literature review section. To address your question, we consider three topologies using synthetic networks: the Watts–Strogatz model for the **small world** property, the Stochastic Block model for **community structure**, and the Barabási-Albert model for the **scale-free** property. After generating the network, we simulate the observed response $\mathbf{Y}=f+\epsilon$, where $f$ is a weighted linear combination of leading eigenvectors and $\epsilon \sim N(0, \sigma^2 I_n)$ with noise level $\sigma^2\in(0.5, 0.6, 0.7, 0.8, 0.9, 1)$. Based on **Figure 4** of the PDF, the proposed method outperforms SoTA offline active learning methods and is robust to noise. >**(D) Computational overhead and scalability for large-scale networks** The complexity of our biased sampling method is $\mathcal{O}(n+m+nm+n^3)$. When the node label query budget is $\mathcal{B}$, the total computational cost is $\mathcal{O}(\mathcal{B}(n+m+nm+n^3))$. The main complexity of the proposed sampling method originates from the SVD operation.
When the node feature dimension $p \ll n$, we can speed up the informative selection by replacing the SVD with the Lanczos algorithm to obtain the $p$ largest or smallest eigenvalues and the corresponding eigenvectors. The time complexity of the Lanczos algorithm is $\mathcal{O}(pn^2)$ [2]. Then the complexity of the proposed biased sampling method is $\mathcal{O}(pn^2)$ for a single node query. This complexity is comparable to GNN-based network active learning methods since a GNN in general has complexity $\mathcal{O}(pn^2)$ per training update [3]. In terms of memory, the main cost is storing the $n$-by-$n$ graph Laplacian matrix $\mathcal{L}^k$. However, when the network is sparse, $\mathcal{L}^k$ is also sparse for moderate $k$, and can be handled with memory-efficient sparse matrix formats such as those in the Python SciPy package. More importantly, we do not need to store and decompose the $n$-by-$n$ matrix $P_{\mathcal{S}^c} L^k P_{\mathcal{S}^c}$ to obtain its eigenvectors and eigenvalues for node selection. Notice that the rank-$p$ projection matrix satisfies $P_{\mathcal{S}^c} = Z_{\mathcal{S}^c} Z_{\mathcal{S}^c}^T$, where $Z_{\mathcal{S}^c} \in \mathbb{R}^{n\times p}$ is a basis of $P_{\mathcal{S}^c}$. Then we can first perform an SVD on the $p$-by-$p$ matrix $Z_{\mathcal{S}^c}^T L^k Z_{\mathcal{S}^c} = U^T\Sigma U$, and the desired eigenvalues and eigenvectors are $\Sigma$ and $Z_{\mathcal{S}^c}U^T$, respectively. During the process, we only need to store and decompose a $p$-by-$p$ and an $n$-by-$p$ matrix, which can be handled efficiently via GPU-based matrix multiplication even when $n$ is very large. We report the computational time of the proposed method for one node query on multiple benchmark network datasets in **Table 3** of the PDF. The time cost of a single query is about 2 seconds when $n$ is about 170,000. >**References** [1] Focus on Informative Graphs! Semi-supervised Active Learning for Graph-level Classification. Pattern Recognition 2024\ [2] Golub, Gene H., and Charles F. Van Loan.
Matrix computations. JHU press, 2013.\ [3] Wu, Zonghan, et al. "A comprehensive survey on graph neural networks." IEEE transactions on neural networks and learning systems 32.1 (2020): 4-24. --- Rebuttal Comment 1.1: Title: Follow-up on our response to your feedback Comment: We sincerely appreciate the time and effort you've put into reviewing our work, and we are truly thankful for your insightful, valuable, and encouraging feedback. In response to your constructive suggestions on numerical experiments, we have conducted additional experiments on real-world networks exhibiting different levels of homophily and heterophily (Table 1), as well as on synthetic networks with different topologies, including community structure, scale-free properties, and small-world properties (Figure 4). As a further supplement to Figure 4, we compare the proposed method with SoTA offline active learning methods on Erdős–Rényi random graphs, since many social and biological networks can be modeled by this model.
The prediction MSE of the different methods on an **Erdős–Rényi graph** ($n=100$, $p=0.25$) with varying response noise is reported in the following table:

| Method | $\sigma^2$=0.5 | $\sigma^2$=0.6 | $\sigma^2$=0.7 | $\sigma^2$=0.8 | $\sigma^2$=0.9 | $\sigma^2$=1.0 |
|------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|
| D-Optimal | 0.48 ± 0.03 | 0.69 ± 0.04 | 0.93 ± 0.06 | 1.22 ± 0.08 | 1.54 ± 0.1 | 1.9 ± 0.12 |
| SPA | 0.61 ± 0.05 | 0.88 ± 0.08 | 1.2 ± 0.1 | 1.57 ± 0.13 | 1.99 ± 0.17 | 2.46 ± 0.21 |
| RIM | 0.49 ± 0.02 | 0.7 ± 0.02 | 0.95 ± 0.03 | 1.24 ± 0.04 | 1.57 ± 0.06 | 1.94 ± 0.07 |
| GPT | 0.58 ± 0.04 | 0.83 ± 0.06 | 1.13 ± 0.08 | 1.48 ± 0.1 | 1.87 ± 0.14 | 2.31 ± 0.17 |
| Proposed | **0.45** ± 0.01 | **0.65** ± 0.02 | **0.89** ± 0.03 | **1.16** ± 0.03 | **1.47** ± 0.04 | **1.81** ± 0.05 |

Consistent with the results on other network topologies in Figure 4, our method still outperforms the competing methods on the Erdős–Rényi graph. Additionally, we discussed the computational overhead of the proposed method in detail, and empirically demonstrated the scalability of our sampling algorithm on large-scale networks (Table 3). We sincerely hope our response adequately addresses your concerns and aids in the evaluation of our work. We look forward to further discussions with you. --- Rebuttal 2: Title: Thanks for the response and I will maintain my score. Comment: Thanks for the response and I will maintain my score. --- Rebuttal Comment 2.1: Title: Thanks for the comment Comment: We thank the reviewer very much for the feedback. Please do let us know if there is any further effort we can make to address your concerns. Thank you!
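The reduced eigendecomposition described in the complexity discussion above (diagonalizing the $p$-by-$p$ matrix $Z_{\mathcal{S}^c}^T L^k Z_{\mathcal{S}^c}$ instead of the $n$-by-$n$ projected Laplacian) can be sketched in NumPy. The graph, the power $k$, and the basis $Z$ below are random placeholders, not the paper's actual quantities:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 200, 5, 2

# Symmetric Laplacian-like matrix L^k for a random placeholder graph.
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.triu(A, 1)
A = A + A.T
L = np.diag(A.sum(1)) - A
Lk = np.linalg.matrix_power(L, k)

# Orthonormal basis Z (n x p) of the rank-p projection P = Z Z^T.
Z, _ = np.linalg.qr(rng.standard_normal((n, p)))

# Small p x p eigendecomposition instead of the n x n one.
small = Z.T @ Lk @ Z                  # p x p
evals, U = np.linalg.eigh(small)      # small = U diag(evals) U^T
V = Z @ U                             # eigenvectors of P L^k P (n x p)

# Check: P L^k P V = V diag(evals), with P = Z Z^T.
P = Z @ Z.T
err = np.linalg.norm(P @ Lk @ P @ V - V @ np.diag(evals))
print(err)  # should be ~0 up to floating-point error
```

Only a $p$-by-$p$ and an $n$-by-$p$ matrix are ever decomposed or stored, matching the memory argument in the rebuttal.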
Summary: The work proposes an offline/batch active learning method for querying labels for nodes of a graph. The setting assumes access to noisy responses for the subset of nodes queried by the active learner. On the theoretical side, gains are shown over random selection, and bounds are shown on the generalization error of the proposed method. On the empirical side, the proposed algorithm is evaluated against random selection and other relevant baselines, and ablation studies are provided. Strengths: - Label-efficient learning in structured settings is a fundamental learning problem with numerous applications. - The proposed approach for graph active learning has both theoretically and empirically verified advantages over random selection. - The meta-approach of balancing informativeness and representativeness can be useful in other contexts. - Authors provide an empirical evaluation which shows the superiority of the proposed approach, and ablation studies which investigate the role of algorithmic components. Weaknesses: - Missing connections to related theoretical literature on active learning on graphs. E.g. [1, 2] [1] Dasarathy, G., Nowak, R., & Zhu, X. (2015, June). S2: An efficient graph based active learning algorithm with application to nonparametric classification. In Conference on Learning Theory (pp. 503-522). PMLR. [2] Zhang, J., Katz-Samuels, J., & Nowak, R. (2022, June). Galaxy: Graph-based active learning at the extreme. In International Conference on Machine Learning (pp. 26223-26238). PMLR. - Presentation can be made clearer, see suggestions below. Technical Quality: 3 Clarity: 2 Questions for Authors: - What is the significance of assumptions (1) and (2)? (lines 93-94) Specifically, is there a way to evaluate how strong assumption (1) is in practice? Also what is meant by "$\bf f$ is influenced by node covariates $\bf X$"? - Can you shed more light on the trade-off between informativeness and representativeness? Is there a standard way to quantify these?
- "Robust" in the title is a bit confusing as it may be read as adversarial robustness, while you only seem to consider random noise. Consider using an alternative like "noise-tolerant". Furthermore, it would be good to state the type of noise (independent, bounded variance) considered early on, in Sections 1 and 2 (e.g. lines 58, 85). - Is there any approximation or other guarantee known for the quality of the proposed greedy approach (line 119) for selecting $\mathcal{S}$ to maximize the threshold frequency? - Presentation of the greedy algorithm can be improved say using an algobox. - Line 127: prescence or abscence - Could you elaborate on the size of $m$ needed in Theorem 2? Can this be empirically estimated? - What is the running time complexity for the proposed algorithm? - Is it possible to empirically verify the tightness/looseness of the theoretical bounds? - Repeated references [2] and [3] Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Authors should elaborate further on the limitations, e.g. assumptions needed for the theoretical results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W.1** We appreciate the reviewer highlighting relevant literature. Both our method and [1] derive a relation between the performance of graph-based active learning and sample complexity. [1] quantifies the complexity in the graph domain of the network. In contrast, our method examines the complexity in the spectral domain. Interestingly, both [1] and our theory suggest that query budgets scale approximately linearly with the labelling function complexity for a fixed estimation error. Both [2] and our work provide theoretical comparisons with baseline query methods. [2] utilizes the bisection algorithm from [1] to show their method achieves better sample balance. Our method offers a theoretical comparison with random sampling in terms of information gain. **Q.1** We revise the assumption on the graph signal space for clarity. Denote $\mathbf{U_d}$ as the first $d$ leading eigenvectors of the normalized graph Laplacian, and $X_i, i = 1,\cdots, p$ as the $i$th node-wise feature vector. We assume that there exists $d$ for the target graph signal $\bf{f}$ such that: $$ \mathbf{f} \in \text{Span}( \mathbf{U_d U_d^T} X_1, \cdots, \mathbf{U_d U_d^T} X_p ).$$ Based on this assumption, $\bf{f}$ depends on both the network topology and the node features. We compare with the function space considered by graph convolutional networks (GCN). It is known [3] that the function space of a GCN can be represented as $U\hat{G}U^TX$ with $\hat{G} = \text{diag}(g(\lambda_1),\cdots,g(\lambda_n) )$, where $(\lambda_i)_{i=1}^n$ are the eigenvalues of the normalized graph Laplacian, and $g(\cdot)$ denotes a polynomial function. Given the same level of homophily in the GCN space such that $g(\lambda_i) = 0$ for $i > d$, we have $$U\hat{G}U^TX \approx \text{Span}( \mathbf{U_d U_d^T} X_1, \cdots, \mathbf{U_d U_d^T} X_p )$$ provided $\mathbf{U_d^T X}$ is full rank. In other words, the proposed function space is almost as large as the GCN space, and can approximate complicated graph signals well.
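The signal-space assumption in Q.1 can be illustrated numerically. In the sketch below, the graph, features, and coefficients are synthetic placeholders; the point is only that a signal built from the projected features $\mathbf{U_d U_d^T} X_i$ is recovered exactly by least squares on that span:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, d = 100, 4, 10

# Normalized Laplacian of a random placeholder graph.
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T
deg = np.maximum(A.sum(1), 1.0)
Dinv = np.diag(deg ** -0.5)
L = np.eye(n) - Dinv @ A @ Dinv

# U_d: eigenvectors of the d smallest eigenvalues (the smooth part).
evals, U = np.linalg.eigh(L)
Ud = U[:, :d]

# Projected features U_d U_d^T X_i span the assumed signal space.
X = rng.standard_normal((n, p))
B = Ud @ Ud.T @ X                     # n x p basis of the span

# A signal in that span is recovered exactly by least squares.
beta = rng.standard_normal(p)
f = B @ beta
beta_hat, *_ = np.linalg.lstsq(B, f, rcond=None)
resid = np.linalg.norm(B @ beta_hat - f)
print(resid)  # ~0: f lies in the assumed span
```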
**Q.2** The trade-off can be clearly illustrated by Theorem 3, which can be simplified as: $$\frac{1}{n}\mathbf{E}_Y\| \hat{\mathbf{f}} - \mathbf{f}\|_2^2 \leq \mathcal{O}\Big( \frac{r_d}{\mathcal{B}}\Big) + \mathcal{O}\Big( 1+\frac{r_d}{\mathcal{B}}\Big) \times \text{Bias}, $$ where $\text{Bias} = \big( \frac{1}{n}\sum_{i>d,\,i\in \text{supp}(\mathbf{f})} \alpha_i^2 \big)$ and $r_d$ is the rank of $\mathbf{U_d^T X}$, and is therefore increasing in $d$. A large $d$ lowers representativeness among queried nodes, thereby increasing variance through the condition-number factor ($r_d/\mathcal{B}$), while it reduces the $\text{Bias}$ term by including nodes that are more informative for identifying the less smooth components of $\mathbf{f}$. We illustrate the trade-off via simulations in **Figure 2** of the PDF, which reports MSE under different $d$. Compared to a small $d$, both MSE and variance are larger with a larger $d$ when the query budget is small, while MSE and variance decrease faster as the budget increases. **Q.3** We thank the reviewer for the thoughtful suggestion! We highlight that robustness is in terms of label noise in both the abstract and introduction, and clearly define label noise in Section 2. **Q.4** An approximation guarantee for the greedy algorithm can be achieved if the threshold frequency $\omega(S)$ proposed in Theorem 1 satisfies submodularity. A function is submodular if $f(S \cup\{v\})-f(S) \geq f(T \cup\{v\})-f(T)$ for all $S \subseteq T$ and $v \notin T$. Based on [4], if $f$ is submodular, then $f(S) \geq (1-1/e) \cdot f\left(S^*\right)$, where $S$ is the set obtained via the greedy algorithm and $S^*$ is the global maximizer. We can show that $\omega(S)$ is submodular for star-, path-, and cycle-shaped networks [5]. One can replace the greedy algorithm with a branch-and-bound algorithm, which has a stronger approximation guarantee towards the global maximum at the cost of higher computational complexity [6]. In practice, we find the greedy algorithm is good enough to maximize the threshold frequency.
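The $(1-1/e)$ greedy guarantee referenced in Q.4 can be demonstrated on a toy monotone submodular function. Here a coverage function stands in for the threshold frequency $\omega(S)$, and the universe and sets are arbitrary placeholders:

```python
import itertools

# Coverage function: a classic monotone submodular objective.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}, 4: {2, 5}}

def coverage(S):
    """Number of universe elements covered by the selection S."""
    return len(set().union(*(sets[v] for v in S))) if S else 0

def greedy(budget):
    """Greedily add the element with the largest marginal gain."""
    S = []
    for _ in range(budget):
        best = max((v for v in sets if v not in S),
                   key=lambda v: coverage(S + [v]))
        S.append(best)
    return S

budget = 2
S_greedy = greedy(budget)
# Brute-force optimum for comparison (only feasible on tiny instances).
opt = max(coverage(list(c)) for c in itertools.combinations(sets, budget))
print(coverage(S_greedy), opt)
```

By the Nemhauser–Wolsey–Fisher result cited as [4], the greedy value is always at least $(1-1/e) \approx 0.632$ of the optimum for monotone submodular objectives; on this tiny instance it actually attains the optimum.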
**Q.5** We improve the presentation by using an algobox. **Q.6** We revise "prescence" to "presence". **Q.7** We present a refined lower bound on $m$. To achieve an information gain, $m$ needs to satisfy: $$\Big( \frac{n - d_{min} - m}{n - m}\Big)^m \times \Big( \frac{n - d_{min} - m}{n - d_{min}}\Big)^{d_{min}} \times \sqrt{d_{min}} < 1,$$ where $d_{min}$ is the smallest node degree in the network. We elaborate on the $m$ derived under different $n$ and $d_{min}$ in **Figure 3** of the PDF. The results show that $m$ should be larger when $n$ is larger and $d_{min}$ is smaller. In practice, we can run the biased sampling procedure multiple times with different values of $m$, and set $m$ as the largest value for which the condition number of the covariance matrix $\tilde{X}^T_{\mathcal{S}}W_S\tilde{X}_{\mathcal{S}}$ is less than a threshold, e.g., 10, based on the rule of thumb for a well-conditioned covariance matrix \cite{Applied_regression}. **Q.8** Please see the global response. **Q.9** With $d$ and $m$ fixed, Theorem 3 implies the MSE decays at a rate of $\frac{1}{\sqrt{\mathcal{B}}}$. We demonstrate the relationship between MSE and $\mathcal{B}$ on simulated data in **Figure 1** of the PDF. The simulation demonstrates that the order of sample complexity in Theorem 3 matches the empirical results, implying the tightness of the generalization bound in Theorem 3. Also see the global response. **Q.10** We remove the repeated references. >**Ref** [1] Dasarathy et al. (2015). S2: An efficient graph based active learning algorithm with application to nonparametric classification. COLT.\ [2] Zhang et al. (2022). Galaxy: Graph-based active learning at the extreme. ICML.\ [3] Wu et al. (2019). Simplifying graph convolutional networks. ICML.\ [4] Nemhauser et al. (1978). An analysis of the approximations for maximizing submodular set functions. Math. Prog.\ [5] Chung (1997). Spectral graph theory. Amer. Math. Soc., 1997.\ [6] Morrison et al. (2016).
Branch-and-bound algorithms: A survey of recent advances in searching, branching, and pruning. Disc. Opt. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I do not have further questions, and retain my current score. --- Reply to Comment 1.1.1: Title: Thank you for your response! Comment: We appreciate your valuable feedback, which has greatly improved our paper. If you have any further comments or questions, we would be more than happy to discuss and provide clarification.
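The candidate-size condition from Q.7 above can be checked numerically. A small sketch, assuming the inequality exactly as stated; the values of $n$ and $d_{min}$ below are arbitrary examples:

```python
import math

def lhs(n, d_min, m):
    """Left-hand side of the candidate-size condition from Q.7."""
    a = (n - d_min - m) / (n - m)
    b = (n - d_min - m) / (n - d_min)
    return a**m * b**d_min * math.sqrt(d_min)

def smallest_m(n, d_min):
    """Smallest m for which the information-gain condition lhs < 1 holds."""
    for m in range(1, n - d_min):
        if lhs(n, d_min, m) < 1:
            return m
    return None

# m grows with n and shrinks with d_min, as stated in the rebuttal.
print(smallest_m(100, 5), smallest_m(200, 5), smallest_m(100, 3))
```

This is consistent with Figure 3 of the rebuttal PDF: the required $m$ is larger for larger $n$ and smaller $d_{min}$.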
Rebuttal 1: Rebuttal: We are grateful to all reviewers for their time and insightful feedback. We are encouraged that the reviewers found our work: 1. contributes to a significant area with numerous applications (4PW9, hH2h) 2. novel and theoretically solid (4PW9, emFb, hH2h, 7FJA) 3. introduces a useful and generalizable trade-off between informativeness and representativeness (4PW9) 4. shows superiority on extensive empirical and synthetic data (4PW9, emFb) 5. easy to implement (hH2h) 6. well presented (7FJA) **Main contributions**: - propose a new offline graph-based active learning method integrating both network structure and node features, robust to random label noise. - introduce complexity measures of the labeling function and of query information gain in the spectral domain. The proposed query strategy aligns with the function complexity measure in the spectral domain, thereby ensuring the learning performance. - derive the generalization error bound for the proposed method, revealing a novel trade-off between informativeness and representativeness in node querying. - conduct extensive empirical and ablation studies to verify the superiority of the proposed method. We have revised our paper based on the reviewers' feedback, which has greatly improved our paper. Please find figures and tables in the attached one-page PDF.
We summarize major revisions below: **Empirical studies** - compare with multiple SoTA offline and online active learning methods on five benchmark datasets with varying homophily levels (Table 1) - compare with offline methods on large-scale networks (Table 2) - compare with offline methods on synthetic data with different network topologies and label noise levels (Figure 4) **Theory analysis** - discuss and verify the tightness of the convergence rate in Theorem 3 via empirical studies (Figure 1) - discuss and verify the trade-off between informativeness and representativeness via empirical studies (Figure 2) - discuss and elaborate on the candidate size needed for Theorem 2 (Figure 3) **Computational cost** - analyze the time complexity of the proposed method - investigate running time on benchmark networks of different sizes and verify the scalability (Table 3) **General discussion** - model assumptions - relation to existing methods and theoretical results **Common questions** **Experiments** Based on the reviewers' feedback, we have conducted additional experiments, summarized in the attached PDF as follows: - **Figure 4**: simulation studies on three synthetic networks with different topologies (small-world property, community structure, and scale-free property) under different noise levels. - **Table 1**: experiments on real-world networks with different levels of homophily (homophily networks: Cora, Citeseer, and Pubmed; heterophily networks: Texas and Chameleon). - **Table 2**: experiments on real-world large-scale networks (Ogbn-Arxiv and Co-Physics). In these experiments, we compared the performance of the proposed algorithm with SoTA offline methods (RIM [1], GPT [2], SPA [3], FeatProp [4]) and online methods (AGE [5], IGP [6]) for graph active learning.
The proposed algorithm achieved the best prediction performance on three synthetic networks with different topologies, the large-scale network Arxiv, the homophily network Cora, and the heterophily network Texas. For the other four datasets, its performance was also competitive. We hope the new experimental results justify the theoretical framework of our algorithm and address the reviewers' concerns. **Computational cost** The complexity of our biased sampling method is $\mathcal{O}(n+m+nm+n^3)$, where $n$ is the number of nodes and $m\leq n$ is the size of the candidate set. When the budget of node label queries is $\mathcal{B}$, the total computational cost is then $\mathcal{O}(\mathcal{B}(n+m+nm+n^3))$. The main complexity of the proposed sampling method originates from the SVD operation. When the node feature dimension $p \ll n$, we can speed up the informative selection by replacing the SVD with the Lanczos algorithm to obtain the $p$ largest or smallest eigenvalues and the corresponding eigenvectors. The time complexity of the Lanczos algorithm is $\mathcal{O}(pn^2)$ [7]. The complexity of the proposed biased sampling method is then $\mathcal{O}(pn^2)$ per node query. This complexity is comparable to GNN-based network active learning methods, since a GNN in general has complexity $\mathcal{O}(pn^2)$ per training update [8]. **Optimality of sample complexity** When the target graph signal has finite complexity over the spectral domain, or fast decay of its heterophilic components, the generalization error in Theorem 3 is dominated by the rate factor $\mathcal{O}(1/\mathcal{B})$, which matches the optimal linear sample complexity in active learning tasks [9,10]. In addition, the MSE converges at the rate $\frac{1}{\sqrt{\mathcal{B}}}$, which is the optimal nonparametric convergence rate in statistics. **References** [1] RIM: Reliable Influence-based Active Learning on Graphs. NeurIPS 2021. [2] Partition-based active learning for graph neural networks. TMLR 2023.
[3] A Structural-Clustering Based Active Learning for Graph Neural Networks. ISIDA 2024 [4] Active Learning for Graph Neural Networks via Node Feature Propagation. Arxiv 2021 [5] Active learning for graph embedding. Arxiv 2017 [6] Information gain propagation: a new way to graph active learning with soft labels. ICLR 2022 [7] Golub, Gene H., and Charles F. Van Loan. Matrix computations. JHU press, 2013. [8] Wu, Zonghan, et al. "A comprehensive survey on graph neural networks." IEEE transactions on neural networks and learning systems 32.1 (2020): 4-24. [9] Chen, Xue, and Eric Price. "Active regression via linear-sample sparsification." COLT. PMLR, 2019. [10] Dasarathy, et al. "S2: An efficient graph based active learning algorithm with application to nonparametric classification." COLT, 2015. Pdf: /pdf/87696c8bd0fb18791da160789085ac1afdb7e104.pdf
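The Lanczos-based speed-up described under "Computational cost" is available off the shelf: SciPy's `eigsh` is a Lanczos-type iterative solver that extracts a few extreme eigenpairs without densifying the matrix. A sketch on a random sparse placeholder graph:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n, p = 500, 5

# Random sparse symmetric weighted adjacency and its graph Laplacian.
A = sp.random(n, n, density=0.01, random_state=3)
A = A + A.T                                   # symmetrize
L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A

# p largest eigenpairs via Lanczos iteration, keeping L sparse.
vals_lanczos, vecs = eigsh(L, k=p, which="LA")

# Cross-check against a full dense eigendecomposition.
vals_dense = np.linalg.eigvalsh(L.toarray())[-p:]
print(np.max(np.abs(np.sort(vals_lanczos) - np.sort(vals_dense))))
```

For sparse graphs this avoids both the $\mathcal{O}(n^3)$ dense decomposition and the $n$-by-$n$ dense storage discussed in the rebuttal.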
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
From Text to Trajectory: Exploring Complex Constraint Representation and Decomposition in Safe Reinforcement Learning
Accept (poster)
Summary: This paper introduces a type of universal natural language constraint to model diverse real-world safety requirements. Also, to avoid relying on a specific human-designed cost function, this paper introduces a Unified Trajectory-Level Textual Constraints Translator (U3T) for aligning text with trajectories and assigning costs via attention mechanisms. By conducting experiments on two environments, this paper reaches the conclusion that U3T can accurately predict whether a given trajectory violates the natural language constraint, and that the policy trained with the predicted cost behaves more safely. Strengths: 1. Broad applicability: U3T introduces trajectory-level textual constraints, capable of modeling diverse constraint requirements in real-world scenarios, making it more widely applicable across different types of constraints and complex environments compared to traditional methods. 2. Novelty: Addressing the natural language constrained reinforcement learning challenge through a novel approach involving multimodal alignment and the novel integration of a credit assignment component within the framework. 3. Automated constraint handling: Through its text-trajectory alignment and attention-based cost assignment components, U3T automates constraint handling and cost allocation, reducing the need for manual design of cost functions and enhancing system flexibility and generality. 4. Empirical results demonstrate that policies trained with U3T achieve lower violation rates compared to standard cost functions. Weaknesses: 1. Complexity and computational cost: Implementing U3T involves complex multimodal learning and attention mechanisms, which may require significant computational resources and time costs, especially when dealing with large-scale datasets or real-time applications. 2. The authors do not specify exactly what model of text encoder was used in the experiments. 3.
Data Dependency: Effective implementation of U3T heavily relies on high-quality and well-annotated textual and trajectory data, which may be challenging to acquire and curate in some practical applications. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Is it feasible for the authors to experimentally analyze the inference speed of the proposed framework? 2. Could the authors specify the model of the text encoder used in the experiments and provide an in-depth exploration of its selection rationale? 3. Leveraging large language models for text understanding shows promise. Have the authors considered strategies for integrating these models to augment the comprehension and deconstruction of intricate textual constraints? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The authors have discussed the limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the constructive comments. We are grateful for the reviewer's feedback and will answer the questions below. We believe we have addressed all the concerns and are glad to follow up in the discussion phase. ---- **Q1: Inference speed** **Answer:** We perform a trajectory length sensitivity analysis on Hazard-World-Grid. Since our framework is mainly used for reinforcement learning policy training, where data is typically provided as input in batches, we measured the inference time for different trajectory lengths with a batch size of 64 on a V100-32G. Figure 3 in the rebuttal PDF shows that the average inference time per trajectory is 10 ms for trajectories of length 100. This inference time is generally acceptable. --- **Q2: The text encoder we used** **Answer:** Due to its bidirectional encoding capabilities, BERT [1] can better understand the context of words and, in particular, excels at capturing dependencies within sentences, while autoregressive language models such as GPT-2 are more suitable for text-generation tasks [2]. We therefore chose bert-base-uncased as the text encoder. We apologize for omitting the description of the text encoder architecture in our paper; we will add it in the final version. [1] Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. [2] Ethayarajh, K. (2019). How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. *arXiv preprint arXiv:1909.00512*. ---- **Q3: Leveraging large language models** **Answer:** Thanks to the reviewer for the constructive view.
Large language models exhibit powerful semantic understanding and task decomposition capabilities, so in the future, an LLM could be utilized to decompose more complex constraints into several simple constraints, which can then be handed off to lower-level constraint-obeying agents for execution. ---- *We hope these responses address your concerns. We remain open to further discussion.* --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer T5E9 Comment: Thanks for the authors' response. I have no further concerns and will lean toward keeping the score.
Summary: * The paper proposes U3T, a new system for more robust safe RL under general text constraints. * The key innovation of the paper is to generalize text constraints such that the constraints don’t refer to a specific entity/state and addressing cost sparsity where constraints are only violated at the final time step. * The proposed system consists of a trajectory encoder and text encoder that are trained jointly using a contrastive loss defined by the KL-divergence. * The authors solve cost sparsity by identifying actions/states that contribute to the violation of a constraint in an attention mechanism. Strengths: * The paper solves an interesting and intuitive problem. There is a clear application of generalizing text constraints from single entity to general entities. * The formulation is intuitive and well-explained. The components introduced build on a simple and effective design * Ablation studies seem to demonstrate the benefit of the cost assignment portion of their system design, a portion that could be extremely expensive cost-wise. * The general cost (Figure 2/Figure 3) of their method seems to reduce in comparison to their baselines that use ground truth. Weaknesses: * I found some details missing in the text. I discuss some portions that could use clarification here. * Which text encoder was used in this experiment? I couldn’t find this in the paper and from your appendix section A.4, it seems untrained. You refer to finetuning with LORA though so I was confused. A small nit, could be interesting to connect the text encoder performance to results. I realize space is tight but if these could be incorporated into the main paper, that would clarify and make the experiments easier to interpret. * I’m unsure how to interpret the average reward results in comparison to the cost results. See questions. My intuition was that rewards should be larger given the pareto frontier results. * I’m having trouble differentiating this paper from the Lou et. al. 
paper. I took a quick look at Lou et. al. -- I can understand that entities needed to be modeled in the paper but from what I can tell, that paper could also be applied in the settings you considered given the use of the pretrained LLM. From Figure 1, I couldn’t see the specific entity problem you were mentioning in the paper. Why wasn’t this added as a baseline? Could you provide more discussion on why these two are different? I think the approach in this paper has considerable novelty so I won’t say that this is entirely negative but it does diminish the contribution slightly. Technical Quality: 3 Clarity: 3 Questions for Authors: * I noticed that generally, while the cost of the method would decrease, the reward wouldn’t change significantly (Figure 2). I actually found this a bit unintuitive. I figured that lower cost would imply a longer trajectory and therefore, longer windows to collect reward over, so average reward could increase. I found this discussion missing in the text. Could this be elaborated? * How long were the trajectories used in the paper? (What was the average size of T?) Could there be a context bottleneck? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: * See question about context bottleneck. Could this be a limitation to discuss? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We are very pleased that you appreciate the effectiveness and direction of our research. We would like to address your concerns below. --- **W1.1: About the text encoder** **Answer:** We used pre-trained bert-base-uncased [1] as the text encoder. During U3T training, we fully fine-tuned the text encoder. The text encoder was fine-tuned again using LoRA during RL policy training. The original U3T, used for predicting costs, was not retrained during RL policy training. A formal description is provided in Algorithm 1. We have added related experiments. We chose three different models: an untrained transformer-25M, pretrained gpt2-137M, and pretrained bert-base-uncased-110M. For natural language understanding tasks, bert-base-uncased has the strongest capability, GPT-2 comes second, and the plain Transformer is the weakest [2]. As shown in Figure 3 of the rebuttal PDF, using bert-base-uncased as the text encoder yields the best results, indicating that models with stronger semantic understanding capabilities improve our framework's performance. [1] Devlin, J., Chang, M. W., Lee, K., et al. BERT: Pre-training of deep bidirectional transformers for language understanding. [2] Ethayarajh, K. (2019). How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. ---- **W1.2 & Q1: The reward didn't change significantly.** **Answer:** Safe RL problems can be classified into two categories (more formally defined in [1]). The first category involves cases where maximizing reward and minimizing cost are aligned. For example, in an environment with a ball in the center of a 3D square, the robot must touch the ball without stepping outside the square (Dalal et al. [2]). Here, touching the ball (maximizing reward) aligns with staying inside the square (minimizing cost). This type of problem is simpler, as it reduces to a standard RL problem focused on maximizing reward.
The second category involves scenarios where maximizing reward and minimizing cost are not aligned. In these cases, the policy that maximizes rewards is not the one that minimizes costs, leading to a trade-off between reward and cost. This type of problem is more challenging, as avoiding hazards does not provide an intuitive reward gain. Our experimental benchmark falls into the second category. In our environment, hazards often lie between the agent and the target. To reach the goal without violating constraints, agents must take detours and adopt behaviors that ensure safety but do not directly yield rewards. This explains why a low violation rate and extended reward collection time do not significantly boost rewards: the additional time is spent avoiding hazards. In our setup, minimizing costs does not directly imply maximizing rewards. Achieving a lower violation rate with the same rewards is already an improvement. This challenge is also related to inherent limitations in current safe RL algorithms, which often lack mechanisms for enhancing exploration while ensuring safety. We appreciate the reviewer's unique perspective and will discuss this in detail in the final version. [1] Liu, Z., Guo, Z., Yao, Y., Cen, Z., Yu, W., Zhang, T., & Zhao, D. Constrained decision transformer for offline safe reinforcement learning. [2] Dalal, G., Dvijotham, K., Vecerik, M., Hester, T., et al. Safe exploration in continuous action spaces. ---- **W2: About the Lou et al. paper** **Answer:** Thank you to the reviewer for recognizing the novelty of our paper. We believe it is the first to enable the modeling of fully generalized textual constraints. Our innovation lies in a unified framework that handles constraints with complex logic involving multiple states, entities, and timesteps. Lou et al.'s work determines constraint violations by calculating the similarity of **single** observations to constraints, limiting it to single-state or single-timestep constraints.
This approach cannot model complex real-world constraints, such as "don't drive after drinking," which involve sequence and dependency. Our work, in contrast, addresses global trajectories, allowing us to model a broader range of constraints. We focus on universal textual constraints with complex semantics, which Lou et al.'s approach cannot handle due to its lack of components to model trajectory dependencies and align trajectory semantics with textual semantics. Hence, we did not use their work as a baseline for our research. We conducted comparative experiments in the Hazard-World-Grid environment to verify our method's accuracy under their task settings. We applied constraints to single states and accumulated costs for violations instead of terminating the episode. Since their training code is not open-sourced, we obtained their experimental results from their paper's figures. The experimental results are as follows:

| Method | Avg. Reward | Avg. Cost |
| ----------- | --------------- | -------------- |
| Lou et al. | $4.9_{\pm0.2}$ | $4.5_{\pm0.5}$ |
| Ours | $4.92_{\pm0.3}$ | $3.3_{\pm0.4}$ |

---- **Q2: About trajectory length** **Answer:** The maximum trajectory length in our experiment is 200, with an average episode length of approximately 100, depending on the violation rate. We conducted a trajectory length sensitivity analysis on Hazard-World-Grid, shown in Figure 2, using AUC to measure prediction accuracy. Initially, increasing the trajectory length improves performance, as longer trajectories provide more dependencies. However, beyond a certain point, further increases result in a slight drop in AUC due to the transformer's difficulty in capturing global information, indicating a contextual bottleneck related to the transformer's encoding capability. We will add the description of trajectory length and discuss the context bottleneck in the final version. ---- *We hope these responses address your concerns.
We remain open to further discussion.* --- Rebuttal Comment 1.1: Comment: Dear Reviewer z2Bi, We want to make sure that our responses address your concerns. If you need further clarification, please feel free to contact us. Thank you for taking the time to review our paper. Sincerely, Authors.
Summary: - This work proposes an approach to training RL agents with constraints. The proposed approach learns language embeddings of constraints and embeddings of trajectories. During training, a similarity score is used to align the space of constraint embeddings and the space of trajectory embeddings. - Moreover, the language embedding of the constraint is also used to do fine-grained credit assignment across the actions in a trajectory. The authors find that this improves performance. Strengths: - The approach is novel in that the technique of credit assignment is applied to the constraint rather than the reward. Or at least this is the claim, and I was not able to find otherwise. - The approach is shown to be effective. With an ablation study, the authors show that their credit assignment approach gives better performance - I have questions about the setup and novelty (see weaknesses), but overall this work seems to represent a positive contribution: the authors show that it is possible to encode trajectory-level constraints with text and to train an RL agent to respect those constraints. Weaknesses: - I have a hard time understanding where this work stands in relation to prior works. The main contribution seems centered around the approach of encoding trajectory level constraints using natural language. But both trajectory level constraints and natural language constraints have been explored independently. And the combination of these two aspects seems straightforward: encode the trajectory level constraint using natural language. I may be overlooking something that makes this less straightforward. I don't quite follow the explanation for novelty on lines 101-104. The Related Works section mentions prior work which learns language representations of constraints, but says that these works do not consider trajectory-level constraints. 
But because the constraints are given in natural language, it doesn't seem that anything in principle prevents these prior approaches from doing so? What is the technical innovation of the proposed work that allows for trajectory-level constraints? - A few things seem missing from the setup (see questions) - I have concerns about the soundness of the described approach. Where does the ground truth come from when training the text-trajectory alignment component? As far as I can tell, there is a notion of a textual constraint but not a notion of a ground-truth verification of a constraint. From what I understand of appendix 1, the textual constraint is determined to be violated by a discriminator model (see questions). In any case, the authors should explain how constraints are determined to be violated. This matters because it means that the discriminator model and the agent could both make the same systematic linguistic errors. - I have concerns about the soundness of the approach in general. Using natural language to describe constraints seems at odds with the enterprise of making RL more "safe." Natural language can be ambiguous, and machine-learned representations of language can be imperfect and prone to making semantic errors. In contrast to other approaches which provide formal guarantees on behavior [1], this seems much less safe. However, I am conscious that others in the field think differently, given the prior work. ## minor - Figure 3: Labels are missing in the legend - Line 297: "Table" --> "Figure" ## references [1] Fulton, Nathan, and André Platzer. "Safe reinforcement learning via formal methods: Toward safe control through proof and learning." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 32. No. 1. 2018. Technical Quality: 1 Clarity: 2 Questions for Authors: - Line 174: What is a "negative pair"? Is it a constraint and a trajectory that violates that constraint? 
- Line 175: Given that the mapping from trajectories to constraints is many-to-many, doesn't $q$ need to be normalized so that the probabilities sum to 1? That is, there are many positive pairs that contain the same constraint, so they should all receive an equal share of positive probability. - Fig 5: I think I'm missing something basic: how is the Pareto front determined? I assume that, per cost, the best reward possible can be determined by some procedure? Is this what was done? - Section 6: Where do the textual constraints come from? Are they human annotated? Appendix 1 seems to suggest not. - Is it guaranteed that the discriminator can tell when a textual constraint is violated? - Line 289: What is the difference between ground-truth mode and cost prediction mode? Is it the cost assignment? Confidence: 3 Soundness: 1 Presentation: 2 Contribution: 2 Limitations: Limitations are mentioned in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for carefully reviewing our submission. We address each point of your concerns separately below. ----- **W1: Relation to prior works** **Answer:** We appreciate the reviewer's constructive question. See the overall response about the **relationship to previous work**. We will elaborate on the above discussion in the final version. ---- **Q1: About negative pair** **Answer:** In our formal setting, a "negative pair" is an unmatched pair of constraint and trajectory, i.e., the trajectory contains no violation of that textual constraint. ---- **Q2: Probability normalization** **Answer**: Sorry for causing the misunderstanding. We do apply normalization in our experiments: ```label_probs = F.softmax(label, dim=1)``` We will rewrite the loss-function part for clarity in the final version. ---- **Q3: About the Pareto front** **Answer:** During RL training, we collected about 200 policies and evaluated them on 50 new episodes for average cost and reward scores. We follow the definition in [1]. To determine the Pareto front, we compared each policy with the others. A policy is not on the Pareto front if another policy has equal or better rewards and equal or lower costs (with at least one strictly better). If no such policy exists, the current policy is part of the Pareto front. So for a given cost, the policy with the highest reward may not be on the Pareto front if another policy exists that outperforms it in both reward and cost. [1] Wikipedia contributors. (2024, June 12). Pareto front. In *Wikipedia, The Free Encyclopedia*. ---- **Q4.1: Where do the textual constraints come from** **Answer:** To ensure the generation of textual constraints is both professional and effective, we implemented the following steps: 1. We developed a descriptor function to systematically identify all hazards encountered at every timestep while a random policy explores the environment. We identified the hazards by checking the system state and storing them in a structured text format. 
For example:
```
hazards = {
    "timestep": {
        5: "step into chemical area",  # at the 5th timestep, the agent steps into a chemical area
        12: "step on the grass",
        20: "step into the water",
        30: "step into the lava"
    }
}
```
2. We then constructed various logical combinations of different hazards. The combination format depends on the type of constraint we want to generate. Taking the mathematical constraint as an example: since the agent was observed to touch four hazardous items, we flexibly assigned an HP-loss value to each hazard and used the total amount of HP lost over the trajectory as the HP budget ("total HP"):
```
combination = {
    "constraint type": "Mathematical",
    "HP loss": {
        "chemical area": 3,  # stepping into the chemical area loses 3 HP
        "lava": 2,
        "grass": 1,
        "water": 0
    },
    "total HP": 6
}
```
3. We engaged several researchers to define a large number of constraint templates in diverse language styles, which were used to rephrase the logical combinations into unstructured natural language. These steps enabled us to generate a series of textual constraints, each corresponding to a structured combination. In practical applications, generating textual constraints could be simplified as follows: - Instance 1: Researchers directly describe constraints based on video/image demonstrations of a trajectory. - Instance 2: Gather videos with dangerous scenes and textual commentary, convert these into trajectories and natural language constraints, and use U3T to train agents to adhere to real-life constraints. ---- **W3&Q4.2: Soundness of the discriminators** **Answer:** Sorry for the misunderstanding due to the omission of a detailed description of the discriminator. We manually designed (not trained) complex discriminators for the various types of constraints, and these are exact by construction. Each natural language constraint corresponds to a structured logical combination (described in Q4.1). 
Human-designed discriminators assess whether a trajectory violates the logical combination, ensuring the accuracy and soundness of our approach. We will provide further details in the final version. It's important to note that this complex discriminator is not needed for the practical application of U3T; it was designed solely to evaluate U3T's performance in our study. ---- **Q5: Difference between ground-truth mode and cost prediction mode** **Answer:** As mentioned in Section 6.1 of the paper, in the ground-truth mode, policies are trained with human-designed constraint discriminators, which give the accurate cost at the timestep when the constraint is violated. In the cost prediction mode, there are two types of cost, the constraint-violation cost and the assigned cost; both are predicted by U3T. The predicted cost function is given by Equation 13 in the paper. ---- **W4: Soundness in general** **Answer:** The reviewer raises an interesting point. Our work emphasizes the flexibility and generalization of language. We propose a modeling paradigm for safe reinforcement learning (RL) that can integrate future advancements in natural language and sequence modeling, enhancing RL safety over time. Rapid adaptation and flexibility can be crucial in dynamic environments, where natural language constraints may be more effective than predefined formal ones. We align agents' behavior with human intentions, similar to OpenAI's RLHF [1]. Just as defining alignment safety in large language models is challenging, our method also struggles to provide a clear definition of textual-constraint safety boundaries. We therefore focus on providing a prototype for safe RL with textual constraints and aim to develop a more rigorous definition of textual-constraint safety in future work. [1] OpenAI. ChatGPT: Optimizing Language Models for Dialogue, 2022. 
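As a concrete supplement to W3&Q4.2, a rule-based discriminator for the mathematical (HP) constraint from Q4.1 can be sketched as follows. This is illustrative only: the function name, the bare hazard names, and the "violation once the HP budget is depleted" threshold are assumptions of this sketch, and our actual discriminators cover many more constraint types.

```python
def violates_hp_constraint(hazard_timeline, combination):
    """Rule-based check: does the trajectory deplete the HP budget?

    hazard_timeline maps timestep -> hazard name; combination holds the
    per-hazard HP loss and the total HP budget (as in the Q4.1 example).
    We declare a violation once cumulative HP loss reaches the budget
    (the exact threshold semantics is an assumption of this sketch).
    """
    lost = sum(combination["HP loss"].get(hazard, 0)
               for hazard in hazard_timeline.values())
    return lost >= combination["total HP"]


timeline = {5: "chemical area", 12: "grass", 20: "water", 30: "lava"}
combo = {"HP loss": {"chemical area": 3, "lava": 2, "grass": 1, "water": 0},
         "total HP": 6}
print(violates_hp_constraint(timeline, combo))  # 3 + 1 + 0 + 2 = 6 >= 6 -> True
```

Because the check is a deterministic rule over the structured combination, it is exact by construction, which is what makes it suitable as an evaluation oracle.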
---- *We hope these responses address your concerns.* --- Rebuttal Comment 1.1: Comment: Dear Reviewer cY35, We hope this message finds you well. We want to check whether our responses address your concerns. If you have any further questions or if there's anything you'd like to discuss, please feel free to comment. Thank you again for your time. Sincerely, Authors.
null
null
Rebuttal 1: Rebuttal: We thank all reviewers for their time spent reviewing our paper and are grateful for the endorsement of **novelty** ("the approach is novel in that the technique of credit assignment is applied to the constraint" - cY35, "the paper solves an interesting and intuitive problem" - z2Bi, "through a novel approach" - T5E9) and **effectiveness** ("the approach is shown to be effective" - cY35, "effective design" - z2Bi, "achieve lower violation rates" - T5E9). --- Here is our unified response to the common questions regarding the **relationship to previous work:** in our paper, a trajectory-level constraint involves complex logic across multiple states, entities, and timesteps. Prior works focused on simpler constraints related to single states or timesteps, which limits their ability to model complex safety requirements. Despite constraints being provided in natural language, previous methods fail to offer a unified framework for trajectory-level textual constraints due to the lack of a unified representation of trajectory dependencies and the inability to align trajectory semantics with natural language. The novelty of our work lies in the unified understanding and application of universal constraints. We align the factual logic in the global trajectory with the semantic logic in the text without requiring manual encoding or separate models for each type of constraint. We achieve this by utilizing **the supervision inherently present in the natural language**. We provided individual answers to every reviewer. We will adapt the paper based on all reviewers' insightful comments and questions, and we are happy to follow up in the discussion phase. 
---- **Rebuttal pdf** We provided additional experimental results to answer some of the reviewers' questions: - Figure 1: question about text encoder from reviewer z2Bi - Figure 2: question about trajectory length from reviewer z2Bi - Figure 3: question about inference time from Reviewer T5E9 Pdf: /pdf/bf5f80fe9f54d9db26438f5605ccfa8d1ba1064b.pdf
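As an illustrative supplement to the alignment discussion above, the row-wise softmax normalization over trajectory-constraint similarities (mentioned in our response to Reviewer cY35, Q2) can be written as a toy NumPy sketch. This is not our training code; the function name and the diagonal-positive convention are assumptions of the sketch.

```python
import numpy as np

def alignment_loss(sim):
    """Contrastive alignment loss over an N x N similarity matrix.

    sim[i, j] is the similarity between trajectory i and constraint j;
    diagonal entries are the matched (positive) pairs. Each row is
    softmax-normalized so the probabilities over constraints sum to 1.
    """
    logits = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(np.diag(probs)).mean()  # negative log-likelihood of positives


aligned = 5.0 * np.eye(3)   # positives clearly stand out from negatives
uniform = np.zeros((3, 3))  # no alignment signal at all
print(alignment_loss(aligned) < alignment_loss(uniform))  # True
```

Minimizing this loss pulls each trajectory embedding toward its matched constraint embedding while pushing it away from the unmatched (negative) ones in the same batch.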
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
From Similarity to Superiority: Channel Clustering for Time Series Forecasting
Accept (poster)
Summary: The paper introduces the Channel Clustering Module (CCM), a novel approach to enhance time series forecasting models. CCM addresses the limitations of traditional Channel-Independent (CI) and Channel-Dependent (CD) strategies by dynamically clustering channels based on their intrinsic similarities. This approach allows the model to balance individual channel treatment with capturing essential cross-channel dependencies, leading to improved forecasting performance. CCM is adaptable to various time series models and demonstrates its effectiveness through extensive experiments on multiple real-world datasets. Strengths: - The originality of CCM lies in its novel approach to channel clustering for time series forecasting, addressing the limitations of existing strategies. - The quality of the work is evident in the well-designed experiments and the clear presentation of results, showcasing the effectiveness of CCM across different datasets and models. - The paper’s clarity in explaining the concept, methodology, and results enhances its readability and understanding. Weaknesses: - The paper could benefit from more detailed discussions on the selection of similarity metrics and the impact of hyperparameters on performance. - The computational efficiency of CCM, especially in large-scale applications, is not extensively discussed. Technical Quality: 2 Clarity: 3 Questions for Authors: - Where is the official code? - In terms of thinking, the principal component analysis method is actually similar to the proposed method. Please provide a detailed explanation. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The limitation of the Channel Clustering Module (CCM) outlined in the paper includes its scalability to extremely large datasets and the computational overhead introduced by the clustering and embedding processes. 
While CCM shows improvements in forecasting, its efficiency in real-time forecasting scenarios with limited computational resources remains to be optimized. Additionally, the clustering and embedding processes in CCM introduce additional computational overhead, which could be a concern in scenarios where computational efficiency is critical. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable feedback and positive comments. We address the potential concerns as follows. >The computational efficiency of CCM, especially in large-scale applications, is not extensively discussed. As discussed in Sec.4.3, the computational complexity of CCM scales linearly with the number of channels $C$, which is computationally efficient for real-world deployment. Our extensive experiments, including large-scale datasets such as traffic (15,122,928 observations) and stock (13,900,000 observations), provide strong evidence for CCM's promising scalability on extremely large-scale datasets: 1) CCM consistently **reduces model complexity for CI models** (e.g. DLinear and PatchTST), regardless of dataset scale or size (Fig. 5); 2) The **linear scaling w.r.t channel count** enables efficient handling of high-dimensional data, crucial for many practical applications; and 3) our experiments reveal that **the performance gains achieved by CCM are maintained across different dataset sizes**, suggesting that the method's benefits are robust and not limited to specific data scales. This demonstrates the efficiency and scalability of CCM, making it promising for extremely large-scale real-world deployments. Furthermore, we've identified several strategies to optimize the scalability of CCM, including leveraging parallel and distributed frameworks for cluster assigner training and applying sampling on time series similarity computation to optimize computational resources. Algorithmic optimizations, such as efficient and fast attention with linear time complexity further support CCM's scalability. However, we would like to emphasize that these techniques are compatible and orthogonal to the contribution of this manuscript and require substantial additional research that would expand beyond our current scope. We appreciate your suggestion and will expand our discussion on computational efficiency in the revised version. 
>Where is the official code? As we mentioned in Line 677 (Appendix C.3), the code is available at the following anonymous link: https://anonymous.4open.science/r/TimeSeriesCCM-4E83. We will have an official GitHub repository once the paper is accepted. >The principal component analysis method is actually similar to the proposed method. Please provide a detailed explanation. PCA and the proposed CCM serve fundamentally **different purposes and operate on distinct principles**. PCA is primarily a dimensionality reduction technique that transforms high-dimensional data into a lower-dimensional space, preserving maximum variance. It does not inherently cluster data but rather restructures it into orthogonal components. In contrast, CCM is specifically designed for time series forecasting, focusing on grouping channels based on their intrinsic similarities to enhance prediction accuracy and interpretability. CCM dynamically clusters channels into cohesive groups, leveraging these similarities to capture complex inter-channel dependencies. It enhances forecasting by allowing models to treat clusters as coherent entities, thus improving both individual channel fit and cross-channel interactions. Essentially, while PCA emphasizes variance capture and dimensionality reduction, CCM prioritizes similarity-based clustering to optimize time series analysis and forecasting. >Detailed discussions on the selection of similarity metrics and the impact of hyperparameters on performance would benefit the paper. The selection of similarity metrics and other alternatives are discussed thoroughly in Appendix A.1. We also conducted ablation studies on the cluster ratio and look-back window length to investigate their impact on model performance. Please refer to Appendix D for detailed results and analysis. We hope that the above clarification improves your confidence in our work. Let us know if you have any further questions/concerns. 
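To make the PCA-versus-clustering contrast concrete, here is a NumPy-only toy sketch under simplified assumptions (it is not the actual CCM implementation; the correlation-with-channel-0 grouping rule is purely illustrative): PCA mixes all channels into orthogonal components, whereas similarity-based clustering assigns each channel to a group.

```python
import numpy as np

# Toy multivariate series: 4 channels forming two intrinsic groups
t = np.linspace(0, 2 * np.pi, 200)
X = np.stack([np.sin(t), 2 * np.sin(t),
              np.cos(t), 0.5 * np.cos(t)], axis=1)  # shape (T, C) = (200, 4)

# PCA: restructure into orthogonal components; every component mixes ALL channels
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
print(Vt[:2].shape)  # (2, 4): each principal direction has weights on all 4 channels

# Similarity-based clustering: group channels by correlation (here, with channel 0)
corr = np.corrcoef(X.T)
labels = (corr[0] < 0.5).astype(int)
print(labels)  # [0 0 1 1]: sin-like channels vs cos-like channels
```

The sketch shows the difference in outputs: PCA yields dense projection directions over all channels, while clustering yields a discrete channel-to-group assignment that downstream modules can exploit per cluster.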
--- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. The clarifications provided have mitigated some concerns. --- Rebuttal 2: Title: We would like to hear from Reviewer SRyH Comment: Dear Reviewer SRyH, As the discussion period is close to end, we would like to follow up and ensure that our responses have adequately addressed your concerns. We sincerely appreciate your comments and suggestions, which have significantly contributed to improving our paper. In response to your valuable feedback, we have added more discussion on the model efficiency and clarified the difference between PCA and our proposed CCM. We would be more than happy to further discuss if there are any remaining questions. Thanks again for your time and consideration. Regards,\ Authors --- Rebuttal Comment 2.1: Comment: I will complete my response promptly. Thank you for the reminder.
Summary: Time series forecasting has been a topic of interest, with previous studies exploring different strategies. The Channel-Independent (CI) strategy treats channels individually, improving forecasting performance but lacking generalization and ignoring channel interactions. On the other hand, the Channel-Dependent (CD) strategy combines channels indiscriminately, leading to oversmoothing and reduced accuracy. A channel strategy is needed that balances individual treatment and essential interactions. Based on the correlation between performance and channel mixing, a novel Channel Clustering Module (CCM) was developed. CCM groups channels with intrinsic similarities and utilizes cluster information, combining the advantages of CD and CI. Experimental results show that CCM enhances the performance of CI and CD models, enables zero-shot forecasting, and improves interpretability of complex models by uncovering intrinsic patterns among channels. Strengths: 1. The proposed model-agnostic method CCM achieves optimal performance between single-channel and cross-channel modeling, and it can be integrated into existing time series prediction models to enhance their performance. 2. By learning prototypes from clusters, CCM facilitates zero-shot forecasting on unseen samples, whether in univariate or multivariate scenarios. 3. The author integrated CCM into four mainstream time series prediction models on multiple different datasets. The experimental results demonstrate that in most cases, CCM can bring about significant performance improvements. Weaknesses: 1. The experimental section involves a limited number of baseline methods, for example, SOTA LLM-based time series prediction models [1, 2] were not selected. 2. I noticed that CCM introduces additional model complexity, as it has an independent Feed Forward layer for each cluster. 
When the value of K is large, this may result in an excessive number of Feed Forward layers, leading to a significant increase in space complexity. On the other hand, the time complexity of CCM is linearly related to K and C. For certain datasets with a higher number of channels (e.g., Traffic), CCM may noticeably increase the time complexity of the base model. Some of the results in Figure 5 of the Appendix hint at this issue. 3. According to Table 14, the improvements brought by CCM to the base model are sometimes overshadowed by the perturbation caused by randomness. This may indicate that CCM has certain limitations. [1] Zhou, Tian, et al. "One fits all: Power general time series analysis by pretrained lm." Advances in neural information processing systems 36 (2023): 43322-43355. [2] Jin, Ming, et al. "Time-llm: Time series forecasting by reprogramming large language models." arXiv preprint arXiv:2310.01728 (2023). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can the author provide the percentage increase in runtime and memory usage after integrating CCM for each method on the Traffic dataset? 2. Can a certain hyperparameter analysis be conducted for the parameter β (i.e., β ∈ {0, 0.25, 0.5, 0.75, 1.0}) to demonstrate the impact of the Cluster Loss on model performance? 3. Can the author provide a significance test (i.e., p-value test) for the experimental results, especially for cases where the values in Table 4 are close? As in some cases, the improvements are minimal, and the observed performance gains might be due to randomness rather than the effectiveness of CCM. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The author elaborates on the limitations of their work. The main limitations are as follows: CCM does not outperform CI/CD in a few cases in the long-term forecasting benchmark; CCM’s scalability to extremely large datasets remains to be tested. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable feedback and positive comments. We address the potential concerns as follows. >Some LLM-based models were not selected as baselines. After careful consideration, we've decided not to include LLM-based time series models in this manuscript for several reasons. Firstly, many LLM-based models treat multivariate time series as independent univariate series, which doesn't align with our core assumption about the importance of cross-channel interactions [1,2]. Secondly, some LLM-based methods that concatenate variates into a single long series [3] are incompatible with CCM's clustering approach, as the clustering will destroy the attention calculation along the single univariate time series. While we acknowledge the growing importance of LLM-based methods in time series forecasting, investigating clustering effects on these models requires substantial additional research that would expand beyond our current scope. We believe this warrants a separate, dedicated study. Our focus on traditional deep learning models allows for a thorough analysis of CCM's effectiveness within an established framework. We are more than happy to add discussions in our revised version. [1] One fits all: Power general time series analysis by pretrained lm\ [2] Time-llm: Time series forecasting by reprogramming large language models\ [3] Unified Training of Universal Time Series Forecasting Transformers >CCM introduces additional space and time complexity. CCM’s scalability remains to be tested. As demonstrated in Sec 4.3, CCM's computational complexity scales linearly with the number of channels $C$, ensuring efficient real-world applications. Our extensive experiments provide strong evidence for CCM's promising scalability on extremely large-scale datasets: - CCM consistently reduces model complexity for CI models (e.g. DLinear and PatchTST), regardless of dataset scale or size (Fig. 5). 
- The linear scaling w.r.t channel count enables efficient handling of high-dimensional data, crucial for many practical applications. - CCM's performance gains persist across various dataset sizes, indicating robust benefits independent of data scale. These findings collectively underscore CCM's efficiency and scalability, making it a highly promising approach for large-scale real-world deployments. >The improvements brought by CCM may sometimes be caused by randomness. Can the author provide a significance test? We conducted a significance test using p-values for the ETTh1 dataset in Table 4, which represents the scenario where CCM shows a minimal performance boost among our experiments. The p-value test results are as follows (values lower than 0.05 are in bold): |ETTh1|96|192|336|720| |:-:|:-:|:-:|:-:|:-:| |TSMixer |-| **0.0046** |**0.007**|**0.011** | | DLinear |**0.034** |0.093|**0.041** | **0.009** | | PatchTST |-|**1.12E-05** |**0.0004** |**0.002** | | TimesNet |0.089|**0.0019**|**0.043**|0.064| CCM's improvement effect is significant in general. Even in this "worst-case" scenario for CCM (i.e., ETTh1 dataset), we observe significant improvements across multiple forecast horizons and base models. Specifically, the majority of the p-values (11 out of 14) are below the conventional significance threshold of 0.05, indicating statistical significance. Despite a few cases where the p-values are above 0.05, the overall trend strongly supports the effectiveness of CCM. We will include a more comprehensive significance test in our revised version. >The percentage increase in runtime and memory usage on the Traffic dataset? We provide the percentage increase in model size and runtime on the Traffic dataset based on TimesNet and TSMixer (two CD models) as follows. $\Delta$ Param (%): | BaseModel / #clusters |2|3|4|5| |-|:-:|:-:|:-:|:-:| |TimesNet| 0.052 | 0.088 |0.125 | 0.162 | |TSMixer| 9.845 |18.420|26.995 |35.570| $\Delta$ iter. 
time (%): | BaseModel / #clusters |2|3|4|5| |-|:-:|:-:|:-:|:-:| |TimesNet|2.907 |3.632|5.291|5.777| |TSMixer| 25.746 | 41.350| 53.352 | 64.462| We observe that CCM represents a minimal increase in memory usage and runtime on TimesNet, demonstrating its parameter efficiency when applied to TimesNet. For TSMixer, the increase is greater than for TimesNet. However, it's important to note that TSMixer is inherently a more compact model, so the absolute increase in parameters remains relatively small and reasonable. These increases should also be considered in the context of the performance improvements that CCM brings, which justifies the modest growth in model size. Moreover, when applied to CI models (e.g., DLinear and PatchTST), CCM consistently reduces model complexity and runtime, further highlighting its versatility and efficiency. >A hyperparameter analysis for the parameter $\beta$? We conducted a hyperparameter analysis for $\beta$, ranging from 0.1 to 2.0, on the ETTh2 dataset (H=720) as follows. |$\beta$ |w/o CCM|0.1|0.3|0.5|1.0|2.0| |-|:-:|:-:|:-:|:-:|:-:|:-:| |TSMixer | 0.445±0.006 | 0.442±0.006 |**0.438±0.003**|0.442±0.009|0.439±0.005 |0.440±0.007| |DLinear | 0.601±0.008 | 0.537±0.013 |**0.499±0.012**|0.501±0.010|0.510±0.008|0.524±0.013| |PatchTST | 0.381±0.005 | 0.381±0.004 | 0.379±0.004 |**0.378±0.007**| 0.379±0.004 |0.381±0.009| |TimesNet | 0.462±0.009 | 0.460±0.006 | **0.457±0.003** |**0.457±0.003**| 0.460±0.004 |0.468±0.008| We observe that introducing CCM (with an appropriate $\beta$) consistently improves performance compared to the baseline without CCM. The optimal $\beta$ value varies slightly across architectures but generally falls in the range of 0.3 to 0.5. Moreover, TSMixer, PatchTST, and TimesNet show more stable performance than DLinear across different $\beta$ values. We will include the sensitivity analysis on $\beta$ in our revised version. 
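For reference, the kind of paired significance test reported above can be sketched as follows. The per-seed MSE values here are made up for illustration, and computing the t statistic by hand stands in for our exact procedure (in practice one would read the p-value from the t distribution, e.g. via `scipy.stats.ttest_rel`).

```python
import numpy as np

# Hypothetical per-seed test MSEs for a base model and base + CCM (5 seeds)
mse_base = np.array([0.50, 0.52, 0.51, 0.53, 0.50])
mse_ccm  = np.array([0.45, 0.46, 0.44, 0.47, 0.45])

# Paired t-test across seeds: t = mean(d) / (s_d / sqrt(n))
d = mse_base - mse_ccm
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

# Compare against the two-sided 5% critical value for n - 1 = 4 dof (2.776)
print(t_stat > 2.776)  # True for this toy data -> improvement is significant
```

Pairing by seed controls for run-to-run variance that is shared between the two models, which is why this test is more sensitive than an unpaired comparison of the two score lists.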
We hope that the clarification improves your confidence in our work. Let us know if you have any further concerns. --- Rebuttal 2: Title: We would like to hear from Reviewer rxdY Comment: Dear Reviewer rxdY, As the discussion period is close to end, we would like to follow up and ensure that our responses have adequately addressed your concerns. We sincerely appreciate your comments and suggestions, which have significantly contributed to improving our paper. In response to your valuable feedback, we conducted additional experiments on significance test, complexity evaluation and hyperparameter ablation study. We would be more than happy to further discuss if there are any remaining questions. Thanks again for your time and consideration. Regards,\ Authors --- Rebuttal Comment 2.1: Comment: Thanks for the authors' rebuttal. Since the authors well addressed my concerns, I will raise my score.
Summary: The paper presents a new Channel Clustering Module (CCM) for time series forecasting, which dynamically groups channels based on intrinsic similarities to balance the strengths of Channel-Independent (CI) and Channel-Dependent (CD) strategies. CCM improves forecasting accuracy by enhancing model generalization and capturing essential cross-channel interactions, achieving significant performance gains in both long-term and short-term forecasting. The module also supports zero-shot forecasting and improves the interpretability of complex time series models. Extensive experiments demonstrate CCM's effectiveness and adaptability across various mainstream time series models. Strengths: 1. The paper introduces a new Channel Clustering Module (CCM) that balances individual channel treatment and cross-channel dependencies, combining the strengths of Channel-Independent (CI) and Channel-Dependent (CD) strategies. 2. CCM enables zero-shot forecasting, leveraging learned prototypes to handle unseen samples effectively. Weaknesses: 1. The introduction of CCM increases the model's complexity, particularly for original CD models, which may result in higher computational overhead. 2. The paper acknowledges that the scalability of CCM to extremely large datasets remains untested, which could be a limitation for practical applications requiring the processing of large-scale data. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How can the similarity metric be further refined to enhance the clustering quality and address performance variations across different domains? 2. What strategies can be employed to test and ensure the scalability of CCM for extremely large datasets? 3. How does CCM perform in real-world applications with diverse and dynamic datasets, and what adjustments are necessary to optimize its performance in such scenarios? 4. 
What techniques can be implemented to mitigate the increased computational overhead introduced by CCM, especially for models with high channel and cluster counts? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1. CCM does not outperform CI/CD strategies in a few cases, possibly due to the underlying channel relationships in certain real-world domains not aligning well with the similarity metric used by CCM. 2. The increased model complexity introduced by CCM may lead to higher computational costs, particularly for models with numerous channels and clusters. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable feedback, which significantly improves the quality of this work. We also address below the potential concerns. >The introduction of CCM increases the model's complexity. The scalability of CCM to extremely large datasets remains untested. What strategies to ensure the scalability of CCM? What techniques to mitigate the increased computational overhead of CCM? As discussed in Sec.4.3, the computational complexity of CCM scales linearly with the number of channels $C$, which is computationally efficient for real-world deployment. Our extensive experiments, including large-scale datasets such as traffic (15,122,928 observations) and stock (13,900,000 observations), provide strong evidence for CCM's promising scalability on extremely large-scale datasets: - CCM consistently **reduces model complexity for CI models** (e.g. DLinear and PatchTST), regardless of dataset scale or size (Fig. 5). - The **linear scaling w.r.t channel count** enables efficient handling of high-dimensional data, crucial for many practical applications. - Our experiments reveal that the **performance gains achieved by CCM are maintained across different dataset sizes**, suggesting that the method's benefits are robust and not limited to specific data scales. This demonstrates the efficiency and scalability of CCM, making it promising for extremely large-scale real-world deployments. Furthermore, we've identified several strategies to optimize the scalability of CCM, including leveraging parallel and distributed frameworks for cluster assigner training and applying sampling on time series similarity computation to optimize computational resources. Algorithmic optimizations, such as efficient and fast attention with linear time complexity further support CCM's scalability. 
However, we would like to emphasize that **these techniques are compatible and orthogonal to the contribution of this manuscript** and deserve substantial additional research that would expand beyond our current scope. We appreciate your suggestion and will expand our discussion on computational efficiency in the revised version. >How can the similarity metric be further refined to enhance the clustering quality and address performance variations across different domains? The similarity metrics and alternatives are discussed thoroughly in Appendix A.1. We select the Euclidean-based similarity metric due to its efficiency and generalization. Our extensive experiments also justify the effectiveness and robustness of this metric in evaluating the cross-channel similarity (see evidence in Appendix E) and consistently enhancing model performance. The similarity metric can be refined by incorporating domain-specific feature selection and engineering or dynamically adjusting parameters based on data distribution characteristics. These techniques are compatible and orthogonal to the proposed method, so we consider them as future work. Again, we would like to emphasize that the current metric already demonstrates efficiency across different domains and serves as a generalizable tool to evaluate channel similarity. >How does CCM perform in real-world applications with diverse and dynamic datasets, and what adjustments are necessary to optimize its performance in such scenarios? Firstly, we clarify that **our experiments already showcase CCM’s robust performance in real-world applications with diverse datasets** (such as electricity, traffic, etc.). As mentioned in Lines 246-247, CCM improves long-term forecasting performance in 90.27% of cases in MSE and 84.03% of cases in MAE across 144 different experiment settings. Moreover, **CCM's versatility extends beyond scenarios where cross-channel relations remain static over time**.
It supports dynamic clustering across different minibatches and time steps, which is crucial for practical applications. Due to the space limit, we discussed possible improvements such as incorporating dynamic similarity metrics or using domain-specific similarity metrics in Appendix F. >Explanation of the few cases where CCM does not outperform CI/CD strategies. As we clearly discussed in Appendix C.5, the CCM method is more useful in scenarios where channel interactions are complex and significant, which is usually the case in real-world data. Therefore, in datasets where channels are nearly independent or where the inter-channel relationships are not as pronounced, the benefits of CCM's clustering mechanism may be less significant. We would like to emphasize that the average improvement rate across all cases (with different base models / datasets / forecasting lengths) is 2.443% in long-term and 6.456% in short-term benchmarks. Specifically, CCM improves the performance in 90.27% of cases in MSE and 84.03% of cases in MAE in long-term benchmarks, given the context that the base models are already optimized for high performance. Therefore, any additional gains through CCM are noteworthy and indicative of its effectiveness in refining forecasting performance. We hope that the above clarification improves your confidence in our work. Let us know if you have any further questions/concerns. --- Rebuttal 2: Title: We would like to hear from Reviewer yXcm Comment: Dear Reviewer yXcm, We sincerely appreciate your comments and suggestions. As the discussion period is close to an end, we would like to follow up and ensure that our responses have adequately addressed your concerns. We would like to emphasize that **most of the issues raised were minor and had already been addressed or discussed in the original submission and appendix.** Nevertheless, we have taken this opportunity to clarify these points and further improve the revision.
Specifically, we have added more discussion on the model efficiency and the potentially improved similarity metric (please refer to the author rebuttal and global comments). We would be more than happy to discuss further if there are any remaining questions. Thanks again for your time and consideration. Regards, Authors
Summary: The paper proposes a new Channel Clustering Module and a corresponding Cluster Loss to group similar channels using a cross-attention mechanism. This creates a hybrid between channel-independent and channel-dependent approaches. Experiments in the paper compare both with and without the proposed module on a variety of existing state-of-the-art methods and are run on time-series datasets in different domains on both short and long-term time horizons, demonstrating that this approach is generally applicable regardless of the domain. These experiments show an improvement in performance over prior work while also enabling zero-shot forecasting. The clustering produced by this module surfaces relationships between channels which are useful for feature analysis and interpretability. Strengths: The paper is well written and easy to comprehend and has a good mathematical background. The CCM and the cluster loss are well motivated and utilize the Gumbel-softmax to represent cluster assignment in a differentiable manner. The technique is evaluated on a variety of time series datasets across diverse domains in both a short and long-term setting. This demonstrates that the method is useful across different models and generally reduces error. The computational complexity of the CCM in the inference setting is also included. Weaknesses: The paper is evaluated on a variety of real-world datasets, but evaluation on some synthetic data, highlighting scenarios where existing methods produce a higher error while adding the clustering module reduces it, would help solidify the claims of the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: - In section 4.1, why shuffle channels in each batch? Channel independence implies each channel is handled individually, while the motivating toy experiment here changes the channel every batch. What happens when not shuffling the channels?
- How does the prediction error change when the prediction time horizon is changed between training and inference? Is it possible to modify the time horizon after training? - In section D.3, how do you use random and k-means for cluster assignment? Additional details about these experiments would help clarify quality improvement described in the table. - How do you expect the performance of the CCM to change when the channel count is on the order of millions? - What does a cluster ratio of 1.0 mean? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors list that the work needs to be tested on large datasets and that there is an additional performance overhead of the proposed CCM module. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
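The differentiable cluster assignment via Gumbel-softmax noted in the strengths above can be sketched as a minimal NumPy illustration. This is not the paper's implementation: the function name, shapes, and temperature value below are assumptions for demonstration only.

```python
import numpy as np

def gumbel_softmax_assign(logits, tau=1.0, rng=None):
    """Soft (differentiable) cluster assignment via the Gumbel-softmax trick.

    logits: (C, K) unnormalized log-probabilities of assigning each of C
    channels to each of K clusters. Returns a (C, K) matrix whose rows are
    valid probability distributions over clusters.
    """
    rng = np.random.default_rng(rng)
    # Sample Gumbel(0, 1) noise: g = -log(-log(U)), U ~ Uniform(0, 1).
    u = rng.uniform(size=logits.shape)
    g = -np.log(-np.log(u))
    y = (logits + g) / tau
    # Row-wise softmax, numerically stabilized by subtracting the row max.
    y = y - y.max(axis=1, keepdims=True)
    p = np.exp(y)
    return p / p.sum(axis=1, keepdims=True)

# Toy usage: 6 channels, 3 clusters, uniform logits.
probs = gumbel_softmax_assign(np.zeros((6, 3)), tau=0.5, rng=0)
hard = probs.argmax(axis=1)  # hard assignment read off the soft probabilities
```

Lowering `tau` pushes each row of `probs` toward a one-hot vector while keeping the sampling step differentiable with respect to the logits, which is the property the review highlights.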
Rebuttal 1: Rebuttal: Thanks for your valuable feedback and positive comments. We address the potential concerns as follows. >Adding evaluation on a synthetic dataset will help solidify the claims. While synthetic datasets serve their purpose in controlled experiments, our study's emphasis on real-world datasets is crucial and more appropriate for validating the practical applicability and robustness of the proposed CCM. Unlike synthetic datasets generated in previous studies, which may not encompass the full spectrum of real-world complexities, we leverage diverse real-world benchmarks. These datasets inherently capture intricacies such as varied data distributions, noise levels, and nuanced interdependencies between channels in real-time forecasting scenarios. This focus allows us to demonstrate CCM's effectiveness in addressing practical challenges across different domains, highlighting its superiority over synthetic benchmarks that often fail to replicate such complex interactions. >Why shuffle channels in each batch? What happens when not shuffling the channels? Shuffling channels aims to remove channel identity information. In the toy experiment in Sec 4.1, we train the model in two patterns: A) the model is trained on the original dataset, and B) the model is trained on the randomly shuffled dataset. Let $W_i$ represent the weights for the $i$-th channel. In Pattern A, $W_i$ is optimized on the $i$-th channel only, while in Pattern B, the optimization of $W_i$ will lose channel identity information, which causes performance degradation on both CI and CD models (see Table 1). Therefore, this toy experiment justifies that channel identity information benefits model performance in general. >How does the prediction error change when the prediction time horizon is changed between training and inference? Is it possible to modify the time horizon after training?
Yes, it is possible to adjust the forecasting horizon after training, although it's not typically done in practice. When shortening the forecasting horizon post-training, the prediction error remains relatively stable because the training loss averages errors over each future step. However, if extending the horizon (e.g., from H1 to H2 where H2 > H1), the conventional approach involves forecasting H1 steps first, then using that prediction as input to forecast the subsequent H2 - H1 steps. This sequential process tends to increase prediction error due to accumulated inaccuracies over the extended horizon. >How do you use random and k-means for cluster assignment? In the Random method, each channel is assigned to clusters with uniform probability. We use *sklearn.cluster.KMeans* to implement the k-means algorithm with a default maximum number of iterations (*max_iter*=300). The input channel embeddings remain the same as that of our proposed Cluster Assigner. We keep the number of clusters consistent for three clustering methods in the ablation study. We will add details in our revised version. >How do you expect the performance of the CCM to change when the channel count is on the order of millions? Our clustering technique, which scales linearly with respect to channel counts, has been tested to efficiently handle large datasets. Numerous experiments across diverse datasets, including large-scale datasets such as traffic (15,122,928 observations) and stock (13,900,000 observations), provide strong evidence for CCM's promising scalability on large-scale datasets: 1) CCM consistently reduces model complexity for CI models (e.g. DLinear and PatchTST), regardless of dataset scale or size (Fig. 
5); 2) The linear scaling w.r.t. channel count enables efficient handling of high-dimensional data, crucial for many practical applications; and 3) our experiments reveal that the performance gains achieved by CCM are maintained across different dataset sizes, suggesting that the method's benefits are robust and not limited to specific data scales. Therefore, CCM is expected to effectively group channels based on similarities while maintaining optimal computational efficiency. >What does a cluster ratio of 1.0 mean? A cluster ratio of 1.0 indicates that the number of clusters equals the number of channels, but it does not imply channel independence. In our experiments, even with a cluster ratio of 1.0, we observed clustering phenomena in channel embeddings, albeit with some empty clusters. This ratio is maintained for experimental rigor and to validate the robustness of the model. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. These have addressed most of my concerns. --- Rebuttal 2: Title: We would like to hear from Reviewer 9puR Comment: Dear Reviewer 9puR, As the discussion period is close to an end, we would like to follow up and ensure that our responses have adequately addressed your concerns. We sincerely appreciate your comments and suggestions, which have significantly contributed to improving our paper. We would be more than happy to discuss further if there are any remaining questions. Thanks again for your time and consideration. Regards, Authors
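The random and k-means cluster-assignment baselines described in the rebuttal above can be sketched as follows. The rebuttal uses `sklearn.cluster.KMeans` with `max_iter=300`; this pure-NumPy version substitutes a minimal Lloyd's iteration for illustration, and the function names and toy embeddings are assumptions, not the authors' code.

```python
import numpy as np

def random_assign(n_channels, n_clusters, rng=None):
    # Baseline: each channel is assigned to a cluster with uniform probability.
    rng = np.random.default_rng(rng)
    return rng.integers(0, n_clusters, size=n_channels)

def kmeans_assign(emb, n_clusters, max_iter=300, rng=None):
    # Minimal Lloyd's k-means on channel embeddings, a stand-in for
    # sklearn.cluster.KMeans(n_clusters=n_clusters, max_iter=300).
    rng = np.random.default_rng(rng)
    centers = emb[rng.choice(len(emb), n_clusters, replace=False)].astype(float)
    labels = None
    for _ in range(max_iter):
        # Assign each embedding to its nearest center (squared Euclidean).
        d = ((emb[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        new = d.argmin(axis=1)
        if labels is not None and (new == labels).all():
            break  # converged
        labels = new
        # Move each center to the mean of its assigned embeddings.
        for k in range(n_clusters):
            mask = labels == k
            if mask.any():
                centers[k] = emb[mask].mean(axis=0)
    return labels

# Toy usage: 8 channel embeddings forming two well-separated groups.
emb = np.vstack([np.zeros((4, 2)), 10 * np.ones((4, 2))])
labels = kmeans_assign(emb, n_clusters=2, rng=0)
```

In the ablation described above, the cluster count is held fixed across the random, k-means, and learned assigners, so only the assignment mechanism differs.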
NeurIPS_2024_submissions_huggingface
2,024
A Unified Confidence Sequence for Generalized Linear Models, with Applications to Bandits
Accept (poster)
Summary: The paper derives a new time-independent inequality for likelihood ratios in the Generalized Linear Model. The proof is based on a PAC-Bayesian approach with a well-chosen prior. The result is applied to GLM bandit models, improving a variant of GLM-UCB. The resulting regret bound removes an exponential dependency on the norm of the unknown parameter. Several examples of GLMs are discussed, and minimal numerical experiments are reported. The comparison with OFULLog+ is convincingly presented, both in theory and in practice. Strengths: Overall, I consider that the submission is technically sound, somewhat incremental, but will be of interest to the community. Weaknesses: The submission is correctly written, but with a number of clumsy passages (some are listed below). The main text contains not much more than the statement of the results and the (unsurprising) description of GLM-UCB+, while the supplementary material is written as a collection of appendices that are not very well presented. For example, the first appendix is called "Missing Results", which is pretty unspecific. I did not check all the details, but the main lines look correct. l.69 undefined notation \mathcal{B}^d(1) l.77 give a reference l.80 could you specify what R_\mu is in those examples? l.95 the sentence is grammatically wrong, something is missing Thm. 3.1: replace "where" by "for the choice" l.129 it is not the log-likelihood but the likelihood l.583 is it not possible to suggest a proof instead of a reference to "WolframAlpha" (to a plot?) Technical Quality: 4 Clarity: 3 Questions for Authors: How much smaller can \beta_t(\delta) be for a confidence region valid only for a given t>0 (instead of all t>0)? This might be worth writing in the paper Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: This section does not really seem relevant to this mostly theoretical submission Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for valuable feedback and suggestions, and for recognizing our paper's technical soundness and contributions. **Number of clumsinesses, Typos, Organization issues** We apologize for the typographical errors and the organizational issues identified in the manuscript. We are committed to correcting these in the revised version, including re-organizing the appendices as suggested by the reviewer. We sincerely appreciate the reviewer's effort in pointing out these issues. To ensure clarity, we address each typo as follows: - Line 69. $\mathcal{B}^d(1)$ is the ball of radius 1 centered at the origin in R^d. - Line 77. The mentioned properties of GLM are provided in [14], which we will add as reference - Line 80. Of course. For Gaussian, $R_{\dot{\mu}} = 1$; for Poisson, $R_{\dot{\mu}} = e^S$; for Bernoulli, $R_{\dot{\mu}} = \frac{1}{4}$. Thank you for this suggestion, which would make our presentation clearer. - Line 95. Yes. We will appropriately change the sentence. - Line 129. Yes, it should be the likelihood ratio. - Line. 583. We sincerely apologize for this mishap. Due to the pressing time, we forgot to replace this with rigorous proof. Below, we provide rigorous proof of this result, which will, of course, be reflected in the revision: Let us denote $f(p) := p^{-1/2} \left( \mathbb{E}[{X}^p] \right)^{1/p}$ for $p \in \mathbb{N}$. Then, using well-known properties of the Gamma function, we have that \begin{equation*} f(2p) = \sigma 2^{\frac{2p-1}{4p}} (2p)^{\frac{1}{2p} - \frac{1}{2}} \left( (p-1)! \right)^{\frac{1}{2p}} = \sigma \sqrt{ p^{-1} \left( \sqrt{2} p! \right)^{\frac{1}{p}} } \end{equation*} and \begin{equation*} f(2p - 1) = \sigma 2^{\frac{2p - 2}{2(2p - 1)}} (2p - 1)^{\frac{1}{2p - 1} - \frac{1}{2}} \left( \sqrt{\pi} \frac{(2p - 3)!!}{2^{p - 1}} \right)^{\frac{1}{2p - 1}} = \sigma (2p - 1)^{\frac{1}{2p - 1} - \frac{1}{2}} \left( \sqrt{\pi} (2p - 3)!! 
\right)^{\frac{1}{2p-1}}, \end{equation*} where we define $(-1)!! := 1$. Then, we have that \begin{equation*} f(2p) \overset{(i)}{<} \sigma \sqrt{p^{-1} (\sqrt{2} p^p)^{\frac{1}{p}}} = \sigma 2^{\frac{1}{4p}} \leq \sigma 2^{\frac{1}{4}}, \end{equation*} where $(i)$ follows from $p! < p^p$. We also have that \begin{equation*} f(2p-1) \overset{(i)}{<} \sigma (2p - 1)^{\frac{1}{2p - 1} - \frac{1}{2}} \left( \sqrt{\pi} (2p - 1)^{p-1} \right)^{\frac{1}{2p-1}} \overset{(ii)}{<} \sigma \left( \sqrt{\pi} (2p - 1) \right)^{\frac{1}{2p-1}} \overset{(iii)}{<} \sigma \sqrt{\pi}, \end{equation*} where $(i)$ follows from $(2p - 3)!! < (2p - 1)^{p-1}$ (a product of $p - 1$ factors, each smaller than $2p - 1$), $(ii)$ follows from $\frac{p}{2p - 1} > \frac{1}{2}$, and $(iii)$ follows from the observations that for $z \geq e$, $f(z) = (\sqrt{\pi} z)^{1/z}$ is decreasing, and $f(1) = \sqrt{\pi} > f(3) = (3 \sqrt{\pi})^{1/3}$. The first observation can be easily verified as follows: $\frac{d}{dz} \log f(z) = \frac{1 - \log (\sqrt{\pi} z)}{z^2} \leq 0, \quad \forall z \geq e.$ Finally, as $2^{1/4} < \sqrt{\pi}$, we have that $\sup_{p \in \mathbb{N}} f(p) \leq \sqrt{\pi} \sigma.$
(20.2) of 16]: for any fixed $x \in \mathcal{B}^d(1)$, \begin{equation} \mathbb{P}\left( \langle x, \hat{\theta}_t - \theta\_{\star} \rangle \leq \mathcal{O}\left( \lVert x \rVert\_{V_t^{-1}} \sqrt{\log\frac{1}{\delta}} \right) \right) \geq 1 - \delta, \end{equation} where $V_t = \sum_{s=1}^t x_s x_s^\top$ is the design matrix that we assume to be invertible for now. For the logistic model, we have the following [Theorem 1, 17]: \begin{equation*} \mathbb{P}\left( \langle x, \hat{\theta}_t - \theta\_{\star} \rangle \leq \mathcal{O}\left( \lVert x \rVert\_{H_t^{-1}} \sqrt{\log\frac{t\_{eff}}{\delta}} \right) \right) \geq 1 - \delta, \end{equation*} where $t\_{eff}$ is the number of distinct vectors in $\\{x_s\\}\_{s \in [t]}$ and $H_t = \sum_{s=1}^t \dot{\mu}(\langle \widehat{\theta}_t, x \rangle) x_s x_s^\top$ is the Fisher information matrix that we assume to be invertible for now. In both cases, the confidence width is dimension-independent and asymptotically optimal with respect to the Cramér-Rao lower bound. Thus, for each fixed $t$, we indeed have a tighter $\beta_t(\delta)$.
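As a quick numerical sanity check on a bound of the form $\sup_p p^{-1/2}(\mathbb{E}|X|^p)^{1/p} \leq \sqrt{\pi}\,\sigma$ derived in the rebuttal above, one can evaluate the quantity for a standard Gaussian using the textbook absolute-moment formula $\mathbb{E}|X|^p = \sigma^p 2^{p/2}\Gamma(\frac{p+1}{2})/\sqrt{\pi}$. Note this uses the standard Gaussian moments, which may be normalized differently from the rebuttal's derivation, so it is a consistency check, not a verification of the proof itself.

```python
import math

def abs_moment_gaussian(p, sigma=1.0):
    # E|X|^p for X ~ N(0, sigma^2): sigma^p * 2^(p/2) * Gamma((p+1)/2) / sqrt(pi)
    return sigma**p * 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)

def f(p, sigma=1.0):
    # The normalized moment f(p) = p^(-1/2) * (E|X|^p)^(1/p) being bounded.
    return p**-0.5 * abs_moment_gaussian(p, sigma) ** (1 / p)

# Evaluate over a wide range of integer moments; the supremum stays
# comfortably below sqrt(pi) for sigma = 1.
values = [f(p) for p in range(1, 201)]
```

For the standard Gaussian the first value is $f(1) = \mathbb{E}|X| = \sqrt{2/\pi} \approx 0.798$, well under $\sqrt{\pi} \approx 1.772$, and the sequence decays toward $e^{-1/2}$ for large $p$.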
Summary: Confidence intervals for GLMs using a PAC-Bayes approach. Strengths: Confidence sequences and bandit algorithms for GLMs are a hot topic, and the authors' contribution to that topic is solid, both because of the strength of the result itself, and because the proof itself is very simple and easy to verify---something that cannot be said about some of the previous work on the topic. Weaknesses: The paper is written in a style that might be described as "being written for the reviewers". It contains misleading and inaccurate claims which are meant to, presumably, impress the reader with the quality of the results. I give some examples shortly. It's unfortunate, because the ideas in the paper are good, and if only the authors had simply written about their work in a neutral, descriptive manner, I would absolutely recommend that the paper be accepted. However, I do not believe the overclaiming to be some sort of an accident---which if it were, the authors might fix upon my pointing it out---and so without any system for "revise and resubmit", must recommend rejection. Some examples of problematic claims (emphasis mine in all quotes): 1) Lines 1-2, the authors claim that "[...] for any generalised linear models (GLMs) that is guaranteed to be convex and numerically tight". To my understanding, the authors have no guarantee that their result is numerically tight for every GLM, and thus the first sentence of the abstract is false (is there even a guarantee in the paper that there exists a single GLM for which it is numerically tight?). 2) Line 53, "Our main novelty lies in __cleverly__ using [...]"; whether the approach is clever or not is perhaps something for readers to judge, not for the authors to proclaim. 3) Line 107, "we completely remove the poly(S)-dependency from the radius, resolving one of the __open problems__ posited by Lee et al. (__2024__)."
An open problem is a well-known unsolved problem in a field (that is, a problem that numerous other researchers have attempted and failed to solve). Your "open problem" is not that. Rather, it is a single sentence in the middle of a paper published at a conference that took place less than 3 weeks before this paper was submitted that simply states that the authors leave it for future work. 4) Lines 111-112, the authors claim that "perhaps more importantly, [ellipsoidal confidence sequences allow] one to equivalently rewrite the optimistic optimization in the UCB algorithm as a closed-form bonus-based UCB algorithm". Is this true? The claim appears to be that, if $\mathcal{X}$ is a subset of the unit ball and $\Theta$ is an ellipsoid, then the quantity $b(\Theta) = \max_{x \in \mathcal{X}} \max_{\theta \in \Theta} \langle x, \theta \rangle$ has a closed-form expression. Could the authors state the closed-form expression they have in mind for, say, $\mathcal{X}$ being an irregular polytope with $2^d$-many vertices? And if the resulting expression requires $O(2^d)$ time to evaluate... then the statement is trivial, and the ellipsoidal form of the confidence sequences is unimportant. (And of course, fit a spline to those $2^d$ vertices, and now you cannot even write the solution as a sum over the vertices!) 5) Lines 307-308: in the conclusion, the authors state that "[their algorithm] is numerically verified to be the best performing algorithm." The authors compared against __one baseline__ on a __single experiment__ varying only __one experimental parameter__ between __two values__, and the experiment was __two dimensional__. *Really?* Other than that, section 3.3 seems like it could contain interesting insight, but I'm worried that the way it is written, the only people who might be able to understand it are either the authors themselves, or someone who has spent just as much time poring over the paper as the authors have done.
I would suggest that the authors either expand on it and explain it better, or cut it. Some minor typos: - Line 312-313 you talk of "Bayesian randomized (exploration) algorithms for GLM bandits", but then cite 5 papers for these algorithms, out of which the four I am familiar with are frequentist algorithms, not Bayesian. - Line 95-97, second half of sentence seems to be missing - Line 129, the definition looks like the likelihood ratio, not the log-likelihood ratio as claimed - Theorem 3.2, second line - Line 82, the set in probability is wrong - Line 75 rewards should be in filtration Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses section. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: . Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable review and comments. **W1. “Numerical tightness” of the CS** We intended "numerically tight" to mean numerically tighter than previously known and non-vacuous, a phrase often used in prior works on PAC-Bayes. We will clarify this sentence in the revised manuscript. **W2. “Cleverly”** We concur with the reviewer that the novelty of the approach should be left to the reader's judgment. Accordingly, we will remove this word. **W3. An open problem should be well known and one that numerous researchers have tackled and failed to solve** We start by pointing out that obtaining a tight (hopefully optimal) regret bound for generalized linear bandits, including logistic bandits, has been a long-unresolved open problem in bandits, starting from the seminal work of Filippi et al. (2010) and tackled by numerous researchers. Although we cited Lee et al. (2024) [1] as the source of the open problem, this is to emphasize the dependency on $S$ that had been ignored for quite some time before [1]. We will fix the reference to point to the original source and provide more context on the importance of our open problem. We also highlight that Lee et al. (2024) [1] has been available on arXiv since October 2023. In our submission, we cited it using the BibTeX of its most recent conference version in 2024. **W4. Importance of ellipsoidal confidence set** Thank you for pointing this out. We would like to clarify that our original statement did not imply that the solution to the UCB $\max_{x \in X} \max_{\theta \in \mathcal{C}_t(\delta)} \mu(\langle x, \theta \rangle)$ has a closed-form solution. Upon further reflection, we acknowledge that the current wording may lead to misunderstanding, and we thank the reviewer for highlighting this. In Global Response #2, we elaborate on our intended meaning and how it can be corrected. We assure the reviewer that these corrections will be reflected in the revised manuscript.
Additionally, as the reviewer mentioned, if the arm set is of combinatorial size, e.g., $2^d$, then we are indeed bound to incur at least $\Omega(2^d)$ computational cost. However, we demonstrate in Global Response #2 that if the CS is precisely an ellipsoid, the computational cost is $O(2^d)$, which is significantly more efficient than solving a convex optimization for each arm. **W5. Weak experiments** Please refer to Global Response #1. **Regarding Section 3.3** Thank you for your suggestion. We will make sure to explain the relationships more clearly or consider removing this section from the revision. **Typos** We apologize for the typographical errors and will commit to correcting them in the revision. We sincerely thank the reviewer for taking the time to point them out. To ensure clarity, we address each typo as follows: - Lines 312-313: The reviewer is correct. We will correct it to “randomized (exploration) algorithms.” - Lines 95-97: We will revise this sentence appropriately. - Line 129: It should indeed be the likelihood ratio. - Theorem 3.2, second line: We will correct it to $\mathbb{P}(\exists t \geq 1 : \theta_\star \not\in \mathcal{E}_t(\delta)) \leq \delta$. - Line 82: We will correct it to $\mathbb{P}(\exists t \geq 1 : \theta_\star \not\in \mathcal{C}_t(\delta)) \leq \delta$. - Line 75: The rewards should indeed be in the filtration. We hope that our response has properly addressed the reviewer’s concerns and that the reviewer would reconsider the score. Thank you. --- Rebuttal 2: Title: Review of the review Comment: I wanted to voice that I find this review to be an unfair, if not adversarial, assessment of the work. The review makes some comments about writing style until line ~110 and then something about the conclusion. This makes me wonder whether the reviewer read the full paper with sufficient care and gave it enough thought. It has become common fashion to emphasize the paper's contributions in the introduction.
Compared to the standard, I would **not** say that this paper is heavily exaggerating the results that follow. However, I realize that qualitative descriptions of the results can throw off a reader, e.g. them getting mad over the usage of a simple adjective ("cleverly"). I think the paper indeed makes non-trivial contributions, and has a fresh approach (combining Ville's inequality with the change-of-measure technique in the proof) that I have not seen in this immediate area of work. In their rebuttal, the authors also provide a benchmark including the most related work, even reporting the runtimes. The benchmark is still for a synthetic dataset, with small values of $S$, but already gives some insight into the average-case performance of the algorithms (as the bounds are worst-case wrt the reward class). I think the review focuses on form/presentation, and misses out on the content. I am not sure how useful or reliable this type of review is for making a fair assessment of the soundness and relevance of the contributions.
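The bonus-based UCB computation debated in W4 above, one closed-form evaluation per arm when the confidence set is exactly an ellipsoid $\{\theta : \|\theta - \hat\theta\|_H^2 \le \beta\}$, can be sketched in NumPy as follows. This is an illustrative assumption, not the paper's code; the function names, matrix $H$, and toy sizes are hypothetical.

```python
import numpy as np

def ucb_indices(X, theta_hat, H, beta):
    # For the ellipsoid {theta : ||theta - theta_hat||_H^2 <= beta},
    #   max_{theta in set} <x, theta> = <x, theta_hat> + sqrt(beta) * ||x||_{H^{-1}},
    # so each arm's optimistic index has a closed form: O(K) evaluations
    # for K arms instead of K convex programs.
    Hinv = np.linalg.inv(H)
    widths = np.sqrt(np.einsum("kd,de,ke->k", X, Hinv, X))  # ||x_k||_{H^{-1}}
    return X @ theta_hat + np.sqrt(beta) * widths

# Toy usage: 5 arms in dimension 3 with a positive-definite H.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
theta_hat = rng.normal(size=3)
A = rng.normal(size=(3, 3))
H = A.T @ A + np.eye(3)  # stands in for a design/Fisher-information matrix
ucb = ucb_indices(X, theta_hat, H, beta=2.0)
best_arm = int(np.argmax(ucb))
```

For an exponentially large arm set, as the reviewer notes, this still costs one evaluation per arm; the closed form only removes the inner optimization over $\theta$, not the enumeration over arms.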
Summary: This paper proposes likelihood-ratio-based confidence sequences for generalized linear inference, whose width depends only logarithmically on $S$, the bound on the norm of the parameter vector. This is achieved by utilizing a PAC-Bayesian change-of-measure inequality, with the prior and posterior distributions chosen very carefully. Tightness of the bounds is compared with some prior work, and a small numerical experiment is provided to demonstrate potential benefits of the confidence bounds for the application of logistic bandits. Strengths: - Key contribution is reducing the $\mathrm{poly}(S)$ dependency of the width of the confidence sets to $\log S$. Practitioners typically do not use state-of-the-art CS prescribed by theory because 1) they are tough to compute and 2) they depend on unknown parameters. This paper takes a step towards mending this gap by addressing the second issue. Since the dependence of the width on $S$ is logarithmic, the practitioner can choose a conservatively large upper bound on $||\theta_\star||$ and not suffer from loose/uninformative CS. - The work benefits from the versatility of LR-based CS and automatically gives results that are applicable to multiple data/noise models. - Gives convex CS for logistic bandits that have rate-optimal dependence on the $\kappa$ parameter. I might be wrong, but as far as I know, prior work with such dependency on $\kappa$ resorts to non-convex sets [Faury 2022]. *I am not aware of the latest results, particularly [Lee 2024].* - The proof is light and combines simple ideas from not-so-connected areas. Compared to prior work on parallel topics (on GLM, logistic, or Poisson models), this seems like a more elegant approach to prove anytime validity of confidence sequences. Weaknesses: I think improving the dependency on $S$ gains significant relevance once it yields a practical algorithm.
If there is no computationally efficient/stable way of calculating the confidence sets, then the relevance of the results is limited to a subset of the bandit theory community. Maximising the UCB on the proposed sets (Thm 3.1 and Thm 3.2) does not seem to be a computationally feasible task, particularly in higher dimensions. Further, I'm afraid that practical relaxations/approximations would blow up the width, losing the current theoretical edge. - The ellipsoid sets are proposed as an easy-to-implement alternative (which actually might not be the case, due to the Hessian in the norm). But it is not clear to me if they are similarly tight. Depending on the parameter $R_s$, these sets may again scale with $\mathrm{poly}(S)$. - While the paper makes a valuable theoretical contribution, I think it would need more experiments to appeal to a broader community. For instance, a proper benchmarking of the CS against practically common choices (e.g. a vanilla GP-based CS built on a very loose linear regression model that uses sub-Gaussian noise with a large variance), showing that the statistical gains are worth the computational effort by reporting the regret and the computational efficiency (e.g. number of flops). - Experiments lack comparison with relevant baselines and are only in the logistic case. In particular, Emmenegger 2023 and Faury 2022 are two algorithms that should be compared to. IIRC, Emmenegger 2023 does not work well for logistic bandits, so this would help strengthen the message of the paper. However, Faury 2022 seems to make a strong case for the joint computational and statistical efficiency of their algorithm (for logistic bandits). This is not a weakness, but I should mention that I am not personally aware of the common rates and parameter dependencies in LR-based confidence sequences.
Therefore, I *could not* verify the dependence of the width on certain parameters, so I don't know if everything is optimal, or whether we are sacrificing something else on the path to $\log S$. Technical Quality: 4 Clarity: 4 Questions for Authors: ### Questions: 1) For equation (3), I did not understand why this is a batched estimator and not a sequential one. Can you clarify your terminology of batched vs. sequential estimator? 2) Is there a theoretical benefit/necessity to solving the constrained problem rather than regularizing? I realize that for validity of the CS, we require $\hat \theta_t$ to also lie within $\Theta$, but is there any other reason why we consider the MLE rather than a generalized ridge loss? Asking because from a practical perspective, the latter is better/more stable. 3) What is the dependency of Theorem 3.2 on $S$? Mainly, do the ellipsoid sets blow up for logistic bandits? 4) Can you comment on the computational complexity of calculating the CS of Theorem 3.1 and Theorem 3.2? My feeling is that both are computationally tough to calculate (e.g. due to the Lipschitz constant, or the matrix norm w.r.t. the Hessian matrix). 5) Do you see a clear kernelized extension? In particular, is there a choice of prior-posterior for which the KL divergence is bounded and allows for a similar rate? 6) Is Theorem 4.1 minimax optimal, e.g. w.r.t. the lower bound of Abeille 2021? 7) In Fig 1-(c), is the logistic CS automatically an ellipsoid, or is it just resembling one? 8) Are you using the theoretical $\beta_t(\delta)$ for Fig (1-a) and (1-b)? _____ ### Small typos & Suggestions - [Lines 57, 67, 69] space before parenthesis missing - [Line 72] notation for derivatives isn't consistent. Both $\dot{}$ and ${}^\prime$ are used. - [Line 73] would make more sense to write for $t \geq 1$, since all statements are anytime. - [Line 95] The sentence does not have a verb. Perhaps instead of "Despite" the authors meant "Exists".
- [Line 100] The first time I read "unbounded GLM" I was confused. Would be good to mention that this refers to the values of $\mu$. - [Theorem 3.1] Second line should be $\mathcal L_t$, the subscript is missing. - [Line 156] It is not clear what "it" refers to. Perhaps swap it out for "the analysis". - [Line 161] Would be good to include a reference for the first equality in the equation. Does this hold with equality or is it an upper bound? Is this common knowledge? I did not know it. - [Line 173] This line reads vaguely and informally; "even" is used twice. It's best if the justification is made rigorous or removed entirely. - [Fig 1] To demonstrate the robustness of the algorithm to $S$ you could've chosen a much larger value and still significantly outperformed the baseline. Would be nice to see the effect of $S = 10^{\{1, 2, 3, 4\}}$. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 2 Limitations: The proof technique might be limited to the linear setting, which in my opinion can be viewed as a limitation, if one of the contribution points of the paper is the proof technique. Practical limitations and applicability are not adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for several enlightening questions and suggestions that will certainly improve our paper. **W1. & Q3. Statistical issues with ellipsoidal relaxation** Indeed, the ellipsoidal CS introduces an additional factor of $S$, as it uses the self-concordance control (Lemma A.4). For logistic bandits, the blowup is only by a factor of $S$, as $R_s = 1$ for the Bernoulli distribution. Among the *ellipsoidal* CSs available for linear and logistic bandits, our ellipsoidal CS’s width is theoretically the tightest, ignoring absolute constants. Indeed, we believe that our ellipsoidal CS-based UCB algorithm attains $\mathcal{O}(d \sqrt{ST / \kappa_\star(T)})$ regret, which is strictly better than many baselines' regret guarantees, e.g., **ada-OFU-ECOLog**. Also, the complexity of computing the Hessian $H_t = \sum_{s=1}^t \dot{\mu}(\langle x_s, \hat{\theta}_t \rangle) x_s x_s^\top$ is similar to that of the design matrix in linear bandits, making our ellipsoidal CS practical. **W2. & W3. Lacking experiments** Please refer to our Global Response #1, where we address many of the reviewer’s suggestions and concerns about the experiments. As measuring the number of flops for constrained nonlinear convex optimization was unclear, in SPDF, we report the algorithms' runtimes (sec), measured by Python’s time module. **Q0. Are we sacrificing something for $\log S$?** Yes, obtaining $\log S$ requires more computational power (e.g., norm-constrained MLE). Please refer to our above response to W1 & W3. **Q1. Batched vs. sequential estimator?** We apologize for the confusion. These terms distinguish between $\mathcal{L}_t(\hat{\theta}_t)$, which uses a single MLE to compute the loss at $t$, and $\mathcal{L}_t(\\{\hat{\theta}_s\\}\_{s=1}^t)$, which uses all the MLEs computed so far to compute the loss at $t$, as in weighted likelihood ratio testing [4]. We will clarify this in the revision. **Q2.
Why constrained MLE?** If we stick to the uniform prior, then we are not aware of any way to allow the regularized MLE, since then we cannot ensure that there is sufficient prior probability mass around the regularized MLE (the regularized MLE can be far outside of the support of the uniform prior). The constrained MLE ensures that the estimator falls in the support of the uniform prior, so one can find sufficient probability mass around it. But then one may wonder: why a uniform prior? We made this choice since we found that the KL divergence between the uniform prior and the posterior has a closed form and that it provides us with much easier control when under-approximating the integral w.r.t. the posterior. Alternatively, one can consider, e.g., a Gaussian prior and attempt to make the regularized MLE work, but we found that the terms are much harder to control. We have not explored this exhaustively, so we believe it is an interesting future direction! **Q4. & W1. Computational complexity of calculating the CS?** First, the radii in Theorems 3.1 and 3.2 can be explicitly computed using the Lipschitz constants for various GLMs provided in Table 1. Also, the complexity of computing the matrix norm w.r.t. the Hessian matrix $H_t = \sum_{s=1}^t \dot{\mu}(\langle x_s, \hat{\theta}_t \rangle) x_s x_s^\top$ is modest, considering that its computation is composed of matrix-vector products. Please refer to our Global Response #2 for the computational issues in the UCB. **Q5. A clear kernelized extension?** No, extending to the RKHS setup seems non-trivial. The primary issue is that no translation-invariant Lebesgue measure exists in infinite-dimensional function space (e.g., an RKHS), i.e., there is no uniform distribution and no concept of likelihood, among many counterintuitive issues.
Indeed, the usual properties that hold in finite dimensions may fail in infinite dimensions (see, e.g., [13]), and one must be very careful when transferring the current PAC-Bayes proof to infinite dimensions. For instance, for the KL between two Gaussian measures to be well-defined, one measure must be absolutely continuous against another, which holds under certain nontrivial conditions on mean and covariance operators (Feldman–Hájek theorem). One could consider using the well-studied Gaussian Process prior/posteriors (as hinted at in our answer to **Q2**), but choosing the mean and kernel to obtain similar guarantees is non-trivial. **Q6. Is Thm 4.1 minimax optimal?** Somewhat yes. As elaborated in lines 250-272, the leading term of our regret bound for *logistic bandits* is indeed (locally) minimax optimal in $d, T, \kappa_\star(T)$ relative to the lower bound of [2]. Moreover, a closer look into their proof shows that their lower bound scales as $1 / S$, indicating a gap of $S$ between the lower and upper bounds. To the best of our knowledge, there isn’t any generic regret lower bound that holds simultaneously for all **GLB**s. We suspect a similar $d\sqrt{T / \kappa_\star(T)}$ lower bound would hold for **GLB**s. One could either carefully modify the proof of [2] for GLMs (e.g., their relative entropy decomposition lemma (Lemma 6) relies on the fact that the reward distribution is Bernoulli) or come up with something new. We leave this as future work. **Q7. Is Fig 1(c) ellipsoid?** No, our CS at each time $t$ is a level set of the negative log-likelihood loss $\mathcal{L}_t(\cdot)$, which isn’t precisely an ellipsoid. But, as the CS tightens, it more closely resembles an ellipsoid, as the second-order Taylor approximation (an ellipsoid as shown in our Theorem 3.2) gets tighter. We will elaborate on this in the revision. **Q8. 
Are you using the theoretical $\beta_t(\delta)$ for experiments?** Yes, we use the precise theoretical $\beta_t(\delta)$ in all of our experiments, including additional ones in the supplementary PDF. --- Rebuttal Comment 1.1: Title: Reviewer discussion Comment: Thank you for your response and the new benchmarks. Interesting to see how closely EMK and OFUGLB perform (statistically and computationally). I still think that you might be able to demonstrate a bigger advantage if you consider significantly larger values for $S$ in the experiments, or consider benchmarks in which the true $S$ is not known.
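To make the cost claims in the answers to W1/Q3/Q4 above concrete: the Hessian $H_t = \sum_{s=1}^t \dot{\mu}(\langle x_s, \hat{\theta}_t \rangle) x_s x_s^\top$ and the ellipsoidal membership test are a few lines of linear algebra. Below is a minimal NumPy sketch, not the authors' code; the logistic link and all variable names are illustrative:

```python
import numpy as np

def logistic_mu_dot(z):
    """Derivative of the logistic mean function mu(z) = 1 / (1 + exp(-z))."""
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)

def hessian(X, theta_hat):
    """H_t = sum_s mu_dot(<x_s, theta_hat>) x_s x_s^T.

    X: (t, d) array of observed context/arm vectors.
    theta_hat: (d,) current (constrained) MLE.
    Cost is comparable to forming the design matrix in linear bandits.
    """
    w = logistic_mu_dot(X @ theta_hat)   # (t,) per-sample weights
    return (X * w[:, None]).T @ X        # weighted sum of outer products

def in_ellipsoid(theta, theta_hat, H, gamma):
    """Membership test for {theta : ||theta - theta_hat||_H <= gamma}."""
    diff = theta - theta_hat
    return float(diff @ H @ diff) <= gamma ** 2
```

The membership test is a single matrix-vector product plus an inner product, consistent with the rebuttal's point that the Hessian norm is cheap to evaluate.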
Summary: This work considers generalized linear models where the distribution of observations, conditionally on a context vector $x$ and an unknown parameter $\theta^\star$, are generated from a (known) exponential family of distribution. Their main contribution is a new confidence sequence for online estimates of $\theta^\star$, leading to improved regret in the context of Generalized Linear Bandits when used to calibrate a UCB algorithm corresponding to the model for the observations. Strengths: The paper is well written and easy to follow, and the presentation of the technical arguments is very pedagogical. The literature review seems extensive, and the authors carefully separated the literature that inspired the design of the confidence sequence and the literature related to GLM and bandits. The results are also well presented, with explicit constants so that it is not difficult to reproduce the results from the paper. In that regard, I appreciated how the authors cared about facilitating future implementation of their approach. I also appreciated the precise derivations proposed for some specific families of distributions, that are good illustrations of the results. Weaknesses: I am not very familiar with a large part of the literature invoked by the authors, basically the literature presented in Section 3.3. Hence, it is quite difficult for me to assess the technical contribution of the paper (not a weakness, but I'm using this space to say it). For a non-expert reader, Section 3.2 is a bit hard to follow. In particular, I did not see the connection between the result and Theorem 3 of Foster et al. (2018). Regarding the bandit part, it might be beneficial to extend the "proof sketch" part to establish the main arguments that are different from previous works. In particular, l. 215 the authors say "one needs extra care in the analysis to ensure that the regret bound is also tight", but after that it seems that all arguments are standard. 
It might be interesting to provide more details or to remove that sentence. Technical Quality: 3 Clarity: 4 Questions for Authors: * The main novel arguments seem to be located in l.153-167. My understanding of the results (please correct me) is that the goal is to use Lemma 3.3 with a nice choice of prior/posterior distributions such as to make the KL divergence as small as possible. However, in Eq. (9) l.159 I fail to see why $P_t$ is a valid posterior given the prior and empirical data. Again, I am not a specialist of the PAC-Bayes literature so I'm sorry if the answer is obvious. * In the bandit part, bounds are established for Self-Concordant GLM. Does the constant $R$ need to be known by the algorithm or is it just involved in the analysis? * Can you elaborate on the connection with Theorem 3 of Foster et al. on Section 3.2? Confidence: 2 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review, for recognizing the significance of our technical contributions, and for providing valuable feedback. **W1. Writing and paper organization + Regret Analysis** Thank you for your suggestions. Indeed, Section 3.2 implicitly assumes that the reader is familiar with PAC-Bayesian analysis. Due to space constraints, we could not provide more details and instead referenced two standard sources [9,10]. In the revision, we will ensure Section 3.2 is easier to follow. Additionally, as the reviewer noted, the proof sketch of the regret bound can be improved. We will address this in the revision. Allow us to elaborate on the main technical novelty of our regret analysis, which will be included in the revised version. The existing proof relies on the original self-concordance lemma [11, Lemma 9], which allows the appearance of $\nabla^2 \mathcal{L}_t(\theta\_{*})$ (related to the matrix $H\_t(\\theta\_\*)$ in [11]). Consequently, it allows one to use the elliptical potential lemma (Lemma B.1), with $x_s$ replaced by $\sqrt{\dot{\mu}(\langle \theta\_\star, x_s \rangle)} x_s$. However, as described in our proof sketch, the self-concordance lemma introduces an additional factor of $S$. To avoid this and at the same time to use our confidence sequence (Theorem 3.1), we apply the Cauchy–Schwarz inequality w.r.t. $\widetilde{G}_t(\widehat{\theta}_t, \nu_t) = \lambda \mathbf{I} + \frac{1}{g(\tau)} \sum\_{s=1}^{t-1} \tilde{\alpha}_s(\widehat{\theta}_t, \nu_t) x_s x_s^\top$ (see Eqn. (26) in Appendix B.1). $\widetilde{G}_t$ arises naturally when one performs a second-order Taylor expansion of $\mathcal{L}_t(\cdot)$ around its minimum $\widehat{\theta}_t$; in particular, the confidence set can be rewritten as a quadratic form involving $\widetilde{G}_t$ (Lemma B.6). However, now note that the elliptical arguments are no longer applicable to $\sum_t \lVert x_t \rVert\_{\tilde{G}_t(\hat{\theta}_t, \nu_t)^{-1}}^2$.
Our main technical novelty is designating the "worst-case $\theta$" over all future confidence sets from time $s$ to $T$ (see $\bar{\theta}_s$ in Eqn. (24) of Appendix B.1). With this, we can follow an alternate chain of inequalities to obtain $\sum_t \lVert x_t \rVert\_{\bar{H}_t^{-1}}^2$, where $\bar{H}_t = 2 g(\tau) \lambda \mathbf{I} + \sum\_{s=1}^{t-1} \dot{\mu}(\langle \bar{\theta}_s, x_s \rangle) x_s x_s^\top$ (see Eqn. (26) in Appendix B.1). Then the elliptical potential lemma is applicable with $x_s$ replaced by $\sqrt{\dot{\mu}(\langle \bar{\theta}_s, x_s \rangle)} x_s$, concluding our proof. Note that there are other details we are omitting here, but they are relatively minor manipulations of adding and subtracting the desired quantities to get to the point where we can apply the elliptical potential lemma as stated above. **Q1. Why is $P_t$ a valid posterior?** The terms "prior" and "posterior" are somewhat misleading in PAC-Bayes, as they need not strictly be the Bayesian prior and posterior. Instead, "prior" should be interpreted as any data-free distribution, and "posterior" as any data- and prior-dependent distribution, providing more flexibility in choosing them. The reviewer is correct that one should choose the "prior" and "posterior" such that their KL divergence is not too large. Simultaneously, in our context, the posterior should be chosen so that the Lipschitz inequality is not too loose (see line 164). We refer the reviewer to the excellent introduction to PAC-Bayes by P. Alquier [9] and a survey on PAC-Bayes time-uniform bounds by Chugg et al. [10]. There is no need to apologize; we also found it challenging to adapt to PAC-Bayes terminology while writing the paper. **Q2. Does the learner require knowledge of $R_\mu$?** Thank you for highlighting this point. Indeed, the learner needs to know $R_{\dot{\mu}}$ or an upper bound on it.
This requirement arises because the bandit algorithm relies on our new confidence sequence (Theorem 3.1), which in turn depends on the Lipschitz constant $L_t$ of the GLM negative log-likelihood, which depends on $R_{\dot{\mu}}$; see Table 1 and Appendix A of our submission. We will clarify this in the revision. **Q3. Elaborating on the connection of Theorem 3 of Foster et al. (2018) with our Section 3.2** To recall the proof of Theorem 3 of [12], they first consider a distribution $P_t(\cdot)$ over the parameter $W$ (see their Algorithm 1) and use $\eta$-mixability of the logistic loss to obtain an inequality involving the negative-log-integral term $\int_{\mathcal{W}} \exp\left( -\eta \sum_t \ell(Wx_t, y_t) \right) dW$. They define $S = \theta W^\star + (1 - \theta) \mathcal{W} \subseteq \mathcal{W}$, where $W^\star$ is the ground-truth optimal parameter and $\theta \in [0, 1)$ is to be determined later. The proof concludes by chaining the inequality $\int_{\mathcal{W}} \geq \int_S$ with the Lipschitzness of the logistic loss and expanding the integral. Our Section 3.2 follows a similar approach with some key differences. While their negative-log-integral also arises in our scenario, we adopt a more compact, streamlined PAC-Bayes approach. In our case, a similar quantity $\mathbb{E}\_{\theta \sim \mathbb{Q}}[\exp(-\mathcal{L}_t(\theta))]$ arises from our Donsker-Varadhan representation (Lemma 3.2), where $\mathbb{Q}$ is our prior. We then apply Ville’s inequality to obtain the time-uniform PAC-Bayes bound (Lemma 3.3), and our choices of prior/posterior resemble their choice of $S$. Our Lipschitzness arguments also resemble their $\ell\_{\infty}$ Lipschitzness argument (see their pg. 17 in the PMLR proceeding version). We will provide a more detailed explanation of the proof in the revision. Thank you for bringing this to our attention. --- Rebuttal Comment 1.1: Comment: Thank you very much for your detailed response, I have no more questions for now.
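A note for readers less familiar with the toolbox invoked earlier in this rebuttal: the "elliptical potential lemma" (Lemma B.1 in the paper) is standard in the linear bandit literature. One common form, e.g., Lemma 11 of Abbasi-Yadkori et al. (2011), states that for $V_t = \lambda \mathbf{I} + \sum_{s=1}^{t} x_s x_s^\top$ with $\lVert x_s \rVert \leq L$,

$$ \sum_{t=1}^{T} \min\left(1, \lVert x_t \rVert_{V_{t-1}^{-1}}^2\right) \leq 2 \log \frac{\det V_T}{\det (\lambda \mathbf{I})} \leq 2 d \log\left(1 + \frac{T L^2}{d \lambda}\right), $$

and the rebuttal's argument applies it with $x_s$ replaced by $\sqrt{\dot{\mu}(\langle \bar{\theta}_s, x_s \rangle)} x_s$; the exact constants and regularization in the paper may differ from this generic statement.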
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for providing detailed and insightful reviews. We are especially encouraged to see that the reviewers recognize the simple yet effective proof ideas (q1y1, HLaD), technically solid contribution in reducing $poly(S)$ to $\log S$ in the CS width (q1y1, HLaD, aQNL), easy-to-follow and pedagogical exposition (TTTd), thorough literature review (TTTd), and the importance of providing explicit constants for practitioners (TTTd). Here, we address two issues raised by multiple reviewers. In each reviewer's reply, we answer the remaining reviewers' comments point-by-point. We also attach a supplementary PDF (SPDF) containing additional experimental results. # 1. Lack of experimental verifications (q1y1, HLaD) Our experiments are meant to show that the theoretical improvement in $S$ for the logistic bandits is also numerically meaningful, as done in [1]. Still, we agree that our paper can benefit from more experiments to appeal to a broader community. SPDF provides additional experimental results for logistic bandits (Figure 1). We will make the codes publicly available. We expand upon the considered settings by varying $S\in\\{5, 7, 9, 11\\}$. We tried $S\in\\{10^2, 10^3\\}$, but we could not handle the numerical instability from the optimization problems in time. We will *continue to try* to get the higher $S$ to work as well. We fix $d = 2$ due to the lack of time and space in SPDF, and also because only then can the complete confidence sets be fully visualized without any projection. We will *continuously work* to include additional experimental results for other values of $d$ in the revision. Also, as suggested by both reviewers, we will work to extend our codebase to linear/Poisson bandits and include the results in the revision. 
Notably, we increased the number of baselines to eight, two of which are ours: **OFUGLB**, **OFUGLB-e** (ellipsoid CS), **EMK** [4], **RS-GLinCB** [5], **OFULog+** [1], and **ada-OFU-ECOLog** [3]. **EMK** has been included as a practically common choice, as it performs better than other CS-based UCB algorithms, including GP-based CS [6]. For consistent experiments, we use the same setting as in our submission. The results show that even against the additional baselines, our **OFUGLB** attains the best numerical regret. Compared to **EMK**, note that for small $S$, ours performs slightly worse. Still, for large $S$, the trend suggests that **OFUGLB** eventually attains smaller regret than **EMK**, suggesting that the theoretical regret of **EMK** may be looser than ours in terms of $S$. The ellipsoidal version **OFUGLB-e** is also shown to have reasonable performance and even outperforms **ada-OFU-ECOLog** for small values of $S$. Of all considered algorithms, **RS-GLinCB** seems to have the highest regret, even though it also has $poly(S)$-free regret. We believe this significant difference stems from the fact that **RS-GLinCB** involves an explicit warmup stage, while ours doesn’t. Indeed, in [5], the authors considered $20000$ rounds for logistic bandits, where **RS-GLinCB** is shown to eventually outperform **ada-OFU-ECOLog**, while we only considered $4000$. We will run additional experiments with increased rounds for the revision for a fair comparison. In SPDF's Table 1, we additionally report the runtime (sec) of a single run of each algorithm (measured via Python’s time module) for the logistic bandit instances. Note that **OFUGLB-e** is generally faster than **OFUGLB** by roughly 6 minutes. Also, **EMK** is generally slower than **OFUGLB**, as it needs to keep track of the entire sequence of estimators.
# 2. Computational feasibility of UCB (q1y1, HLaD) Here, computational feasibility means being implementable using (efficient) convex solvers as subroutines, which is relatively standard in the bandit literature [1,4,7]. While writing the response, we realized that the confidence set in Theorem 3.2 should be $\mathcal{E}_t(\delta)=\left\\{\theta\in\mathbb{R}^d : \lVert\theta-\hat{\theta}_t\rVert\_{H_t}\leq\gamma_t(\delta)\right\\}$. We apologize for the confusion. With this, let us assume that the arm set $X$ is finite. Then, it is easy to see that the UCB using the ellipsoidal CS $\mathcal{E}_t(\delta)$ is computationally efficient, as it reduces to evaluating the following closed-form *objective*: $$ \mathrm{argmax}\_{x \in X}\langle x,\hat{\theta}_t\rangle + \sqrt{\gamma_t(\delta)}\lVert x\rVert\_{H_t^{-1}}. $$ In other words, there is no need to solve a convex optimization problem at each time $t$ and for each arm $x \in X$. In this case, the computational complexity of performing UCB at time $t$ is $\mathcal{O}(T\_{MLE}(t) + d^3)$, where $T\_{MLE}(t)$ is the complexity of computing the MLE at time $t$ and $d^3$ is the complexity of the involved matrix computations, including computing $H_t^{-1}$. Using the likelihood ratio-based CS $\mathcal{C}_t(\delta)$ still results in a convex optimization problem, albeit one a bit less efficiently solvable than using $\mathcal{E}_t(\delta)$ due to the lack of structure. We also remark that computational issues in high dimensions are present in many prior approaches to linear and logistic bandits [1,4,7], and our approach doesn’t introduce additional complexity relative to them. We emphasize that we primarily focus on achieving the tightest regret guarantee for **GLB**s while being computationally *tractable*. We agree with the reviewers that obtaining jointly statistically tight and computationally *efficient* algorithms (e.g., [3]) is an important future direction but outside this paper’s scope. Pdf: /pdf/999547ef16f7dd30617d184198cc32d4dd042089.pdf
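For a finite arm set, the closed-form UCB objective described in the global response above amounts to one $d \times d$ inversion plus vectorized scoring per round. A minimal NumPy sketch of this selection rule (illustrative variable names, not the authors' implementation; `gamma` plays the role of $\gamma_t(\delta)$):

```python
import numpy as np

def ucb_arm(arms, theta_hat, H, gamma):
    """Select argmax_x <x, theta_hat> + sqrt(gamma) * ||x||_{H^{-1}}.

    arms: (K, d) finite arm set; theta_hat: (d,) current MLE;
    H: (d, d) Hessian of the loss at theta_hat; gamma: squared CS radius.
    """
    H_inv = np.linalg.inv(H)          # the d^3 cost mentioned in the response
    means = arms @ theta_hat          # (K,) exploitation term
    # batched quadratic forms x^T H^{-1} x for every arm at once
    bonus = np.sqrt(gamma * np.einsum('kd,de,ke->k', arms, H_inv, arms))
    return int(np.argmax(means + bonus))
```

No per-arm optimization problem is solved; a small `gamma` makes the rule greedy, while a large `gamma` favors arms with large $\lVert x \rVert_{H^{-1}}$ (exploration).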
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Online Classification with Predictions
Accept (poster)
Summary: The paper studies an online classification problem that interpolates between transductive online classification and (standard) online classification. The interpolation is controlled by a "predictor" entity that tries to predict the examples to be labelled: if the predictor is good, the setting is more like transductive online classification, and otherwise it is more like (standard) online classification. For this problem, the authors design an online learner that uses a predictor and can do better than a standard online learning algorithm if the predictor is good. As a corollary, they show that transductive online learning implies online learning when the input stream is taken from a "predictable" collection of streams. For the first result, they also provide a lower bound that shows that it is nearly tight in some sense. Strengths: 1. The paper presents a new and fairly natural problem in online learning, and provides nearly tight guarantees for it. 2. The idea of using a few learners to cover all cases and then running WM on them is useful and interesting (is it novel?). Weaknesses: 1. Most of the proofs use combinations of known techniques. 2. There is some room for improvement in the writing and presentation. For example, on page 2, I didn't understand the difference between contributions (1) and (2). Technical Quality: 3 Clarity: 2 Questions for Authors: Questions: 1. To provide motivation, it is written that the standard online learning setting is often too pessimistic. However, there are other ways to make it more optimistic. For example, what about a stochastic adversary? It seems like a more natural way to make the setting more optimistic, but it is not discussed in the paper. 2. Why is line 172 "without loss of generality"? 3. I am not quite sure that the story of a "predictor" in the background is the most natural way to present the problem.
Isn't the setting you define just transductive online learning with possible noise in the given examples (which is interesting as well)? 4. A notion of "predictable stream collections" is defined, but the paper does not study it or even discuss open questions about it. Do we know what a predictable collection of streams looks like? Suggestions: 1. I didn't understand the sentence in lines 155-156. I suggest rephrasing. 2. The weighted majority algorithm is originally due to the seminal work [1]. I would mention it. Typos/English: 1. Line 56 is unclear. 2. Line 145: "to ability to define" References: [1] Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for noting that we present a new and fairly natural problem in online learning. We will incorporate all suggestions and fix typos in the final camera-ready version. We address each weakness and question below. **Strengths** 2. This technique of constructing a few online learners and running MW on top is not completely new. For example, this technique is used to design agnostic online learning algorithms for multiclass and binary online classification [1, 2]. However, in the context of Online Algorithms with Predictions, we are unaware of any other work that uses this technique. **Weaknesses** 1. Although our techniques may not be groundbreaking, we disagree with the reviewer that our proofs are mere combinations of existing ones. Since this setting has not been studied before, we argue that new techniques needed to be developed. For example, it is not obvious a priori how to construct the class of Experts used by Algorithm 5, or that this entire approach as a whole leads to a near-optimal upper bound. 2. We thank the reviewer for pointing out the ambiguity between contributions (1) and (2). Contribution (1) is mainly aimed at quantifying the minimax rates, while Contribution (2) is concerned with characterizing learnability. In particular, Contribution (1) aims to answer: for any hypothesis class $H$ and a predictor $P$, what is the minimax expected number of mistakes/regret when given access to $P$? Here, the goal is to derive mistake and regret bounds in terms of the performance of $P$. Contribution (2) aims to answer: does having access to a Predictor make online learning easier, and if so, when? That is, are there classes $H$ that are online learnable when given access to a Predictor but not online learnable otherwise? If so, which classes $H$ become online learnable when given access to a Predictor? We will make this distinction more clear in the final version. **Questions** 1.
There are indeed other ways of making the standard online learning setting more optimistic. The "stochastic adversary" mentioned by the reviewer is actually one of them and is known formally as "smoothed online learning" in the literature. We do mention smoothed online learning in Lines 64-65 and Appendix A as an alternative way of going beyond a worst-case analysis. We will add a larger discussion about the smoothed model in the main text of the final version. 2. Since realizable and agnostic learnability are equivalent, it doesn't matter whether we use (1) the existence of a learner with a sublinear mistake bound or (2) the existence of a learner with sublinear regret to characterize offline learnability in the realizable and agnostic settings. Lines 172-173 are just saying that we will always pick (2), the existence of a learner with sublinear regret, to characterize offline learnability in both the realizable and agnostic settings. We will remove this phrase in the final version to avoid confusion. 3. Our setting with a Predictor is actually more general than "transductive online learning with possible noise in the given examples." This is because we allow the Predictor to change its predictions about future instances throughout the game. On the other hand, in the noisy transductive online setting, one presumably fixes the noisy instances revealed to the learner before the game begins. We give a brief discussion of this on lines 136-141. In addition, we find that access to a Predictor (and therefore dynamic predictions about future instances) is more natural in practice. For example, one often has the ability to update predictions about future instances given the present instance (e.g. temperature forecasting). 4. One natural predictable collection of streams is the one induced by dynamical systems. That is, let $X$ be a state space and let $G$ be a collection of transition functions $g: X \to X$.
Then, given an initial state $x_0$, one can consider the stream class $Z$ to be the set of all trajectories induced by transition functions in $G$. We will add a discussion of what predictable stream classes might look like in the camera ready version. [1] Ben-David, Shai, Dávid Pál, and Shai Shalev-Shwartz. "Agnostic Online Learning." COLT. Vol. 3. 2009. [2] Hanneke, Steve, et al. "Multiclass online learning and uniform convergence." The Thirty Sixth Annual Conference on Learning Theory. PMLR, 2023. --- Rebuttal Comment 1.1: Comment: Thank you very much for addressing my comments and questions.
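To make the dynamical-systems example of a predictable stream class concrete, here is a toy sketch (hypothetical illustrative code, not from the paper; the function names are our own): a Predictor that identifies which transition function in $G$ is consistent with the observed prefix can forecast the remainder of the stream exactly.

```python
# Illustrative sketch: streams induced by a known family G of transition
# functions, and a Predictor that forecasts the rest of the stream by
# finding a transition function consistent with the observed prefix.

def trajectory(g, x0, T):
    """Stream (x0, g(x0), g(g(x0)), ...) of length T."""
    xs = [x0]
    for _ in range(T - 1):
        xs.append(g(xs[-1]))
    return xs

def predict_rest(prefix, G, T):
    """Pick any g in G consistent with the prefix and roll it forward."""
    for g in G:
        if all(g(a) == b for a, b in zip(prefix, prefix[1:])):
            rest = [prefix[-1]]
            for _ in range(T - len(prefix)):
                rest.append(g(rest[-1]))
            return rest[1:]  # predicted future instances
    return None

G = [lambda x: x + 1, lambda x: 2 * x]  # two candidate dynamics
stream = trajectory(G[1], 3, 6)         # [3, 6, 12, 24, 48, 96]
# After two instances, the doubling dynamics are identified and the
# remaining instances are forecast exactly.
print(predict_rest(stream[:2], G, 6))   # [12, 24, 48, 96]
```

Once the correct transition function is identified from the prefix, the Predictor's cumulative error on the rest of the stream is zero, which is what makes such stream classes predictable in the sense discussed above.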
Summary: This paper studies the complexity of the online classification problem when provided with a predictor $\mathcal{P}$ that can forecast future features of data samples. Using black-box access to this predictor, the authors provide a beyond-worst-case analysis of online classification algorithms that can adapt to the 'easiness' of the online stream. Specifically, in Theorem 3.1, they demonstrate that there exists an online learner whose mistakes can be upper bounded by the Littlestone dimension, while additionally being bounded by the performance of predictor $\mathcal{P}$ and that of the employed offline learner. As a corollary, their result reveals the relationship between offline learnability and realizable online learnability given such a predictor. A lower bound is provided to demonstrate the tightness of the upper bound. Strengths: 1. The presentation of this paper is clear, and I can quickly grasp the main idea of how to construct an online learner that satisfies the upper bound. 2. This paper explores beyond-worst-case guarantees by leveraging a predictor and an offline learner. With the constructed online protocol, the authors reveal the linkage between offline learnability and online learnability to some extent. 3. The theoretical results appear solid with both the upper bounds and the lower bound, although I have not checked the appendices carefully. Weaknesses: My main concern is about the accuracy of the predictor $\mathcal{P}$. Unlike in the offline setting, where the features $x_{1:T}$ are naturally provided, in the online setting—especially the adversarial one—the predictor $\mathcal{P}$ might not be accurate, and the factor $M_{\mathcal{P}}$ could be extremely large. Also, Algorithm 3 will initialize a new copy every time the predictor makes a mistake. It seems that Algorithm 3 might initialize the copy frequently, and it might not learn the potential pattern even if the mistakes made are minor. 
Technical Quality: 3 Clarity: 3 Questions for Authors: Could we consider 'soft' criteria when initializing the copy? For example, could the online learner initialize a new copy only when the cumulative errors, such as $\sum_t \|x_t - \hat{x}_t\|$, exceed some threshold? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for finding the theoretical results to be solid and the presentation to be clear. We address the concerns below. - Indeed, the stream of instances $x_1, ..., x_T$ could be very unpredictable and $M_P$ can be very large. However, in this case the minimum in Theorem 3.1 is attained by $L(H)$, the Littlestone dimension of $H$. Thus, our algorithm's guarantee is never worse than the worst-case mistake bound (up to constant factors). However, if $M_P$ is very small, then our algorithm's guarantee (see (ii) and (iii) in Theorem 3.1) can be much smaller than $L(H)$. In this way, our algorithm adapts to the quality of predictions made by $P$. - Indeed, if the Predictor makes a lot of mistakes, Algorithm 3 will restart the Offline learner frequently, causing its expected number of mistakes to explode. This is precisely why we give Algorithm 5, which explicitly controls the number of restarts. Note that the expected number of mistakes made by Algorithm 5 (see Lemma 3.7) can be significantly smaller than that made by Algorithm 3 (see Lemma 3.5), and this is captured by term (iii) in Theorem 3.1. - The "soft" criterion for initializing a new copy is an interesting idea. However, it is not immediately obvious to us that such an idea would work, since the Offline learner is only useful when the instances it gets initialized with actually match the true instances in the stream. In any case, note that the guarantee of Algorithm 5 (see Lemma 3.7) is already near-optimal (up to log factors in $T$) given the lower bound in Theorem 3.8. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' replies, which address my concerns. So I decide to raise my score to 6.
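The restart behaviour discussed in the rebuttal above can be made concrete with a small sketch (our hypothetical illustration, not the paper's Algorithm 3): a fresh copy of the Offline learner is initialized whenever the Predictor's forecast of the next instance is contradicted, so the number of restarts equals the Predictor's number of next-instance mistakes.

```python
# Hypothetical sketch of the restart mechanism: count how often a fresh
# Offline-learner copy would be initialized, i.e., how often the
# Predictor's guess of the next instance is contradicted by the stream.

def count_restarts(stream, predict_next):
    """predict_next(prefix) is the Predictor's guess for the next instance."""
    restarts = 0
    for t in range(1, len(stream)):
        if predict_next(stream[:t]) != stream[t]:
            restarts += 1
    return restarts

# A "repeat the last instance" Predictor on a slowly changing stream:
# only two forecast errors, so the Offline learner is restarted twice.
stream = [0, 0, 0, 1, 1, 1, 2, 2]
print(count_restarts(stream, lambda prefix: prefix[-1]))  # 2
```

When the Predictor is accurate, restarts are rare and each Offline-learner copy runs on a long, correctly forecast segment; when the Predictor errs often, the count above grows, which is exactly the regime where the explicit restart control of Algorithm 5 matters.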
Summary: The paper belongs to the field of statistical learning. The learner sees pairs $(x,y)$ and needs to predict the labels $y$ to compete (in terms of the expected number of errors) with any hypothesis from the given class. The main result of the paper, Theorem 3.1, establishes a connection between the offline and the online setup. In the offline setup, the learner knows the sequence $x_1, x_2, \ldots, x_T$ where it needs to predict labels in advance. In the online setup, it gets the $x$s one by one but gets help from a predictor $\cal P$ that tries to predict what remains of the sequence. The paper constructs an online algorithm utilising an offline algorithm and a predictor and gets an upper bound on its performance in terms of the quality of the offline bound and the quality of the predictor. Corollary 3.2 shows that if a class of sequences is predictable (i.e., it can be predicted with sublinear cumulative error) and a class of hypotheses is learnable under offline settings (the regret is sublinear), then their combination is learnable under online settings. Theorem 3.8 provides a lower bound. Strengths: I believe the paper presents an interesting result answering an important question. Weaknesses: I am worried the very technical presentation may limit the appeal of the paper. Typos etc: Page 4, Section 2.3, first line: to ability -> the ability Page 4, Section 2.3. I believe the mentioning of the adversary appears here for the first time, and it is confusing. I understand we are talking of the adversary because we are interested in the supremum of the loss over sequences, but I may be wrong... Page 6. Lemma 2.2 says there is _a_ concave function upper-bounding $g$. Then $\bar f$ is defined as _the_ concave function upper-bounding $f$. Which one is used? The minimal perhaps? 
Technical Quality: 4 Clarity: 3 Questions for Authors: None Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for finding that our work presents an "interesting result answering an important question." We address the concerns below. - We thank the reviewer for pointing out our use of "adversary" in Page 4 Section 2.3. We will change this to "Nature" to make it consistent with the setup in Section 2.1. - Yes, we take $\bar{f}$ to be the smallest concave sublinear function that upper bounds $g$. We will make this explicit in the camera-ready version.
Summary: This paper studies learning-augmented online classification, where the classifier has access to a predictor that forecasts future examples. The paper proposes an algorithm that uses these predictions, and the algorithm's performance depends on the prediction error. The proposed algorithm is robust (never worse than the worst-case mistake bound), consistent (as good as the best offline learner if the predictions are perfect), and its performance degrades smoothly as prediction quality decreases. Their findings show that accurate predictions can make online learning as straightforward as transductive online learning. The contribution of this paper is theoretical. Strengths: - This is the first paper exploring the task of online classification with predictions. The main result is intuitive and elegant, showing that "online learnability with an effective predictor can be reduced to transductive online learnability." - The theoretical part of this paper is sound and well-organized. The paper effectively analyzes the robustness, consistency, and smoothness of the proposed algorithm, which are key aspects of learning-augmented algorithms. Weaknesses: - The paper does not discuss how the predictions can be generated, particularly from machine learning models. Admittedly, most designs in learning-augmented algorithms focus on the algorithmic side. Yet, linking the proposed algorithm to settings where machine learning models can be involved would strengthen the paper's relevance to the NeurIPS audience. - The paper does not include any experiment or evaluation on how the proposed algorithms would perform in real-world scenarios. Similar to the previous point, I think the community studying learning-augmented algorithms values both theoretical soundness and practicality (which is the reason to include learned predictors in the first place). I think some experiments (even on synthetic data) would make this paper more complete. 
Technical Quality: 4 Clarity: 3 Questions for Authors: - I am not an expert in online classification, but are there any public benchmarks or test sets to evaluate the performance of online classification algorithms? If so, I strongly encourage the author to perform experiments to demonstrate the proposed algorithm's performance. - In the third sentence of line 107, should it be $\hat y_t \in \mathcal{Y}$? Otherwise, I am confused about why the prediction is binary. - In lines 377 - 380, do these two citations refer to the same paper? - The notation in the paper is somewhat confusing. Specifically, regarding the predictor, I did not find the formal definition of $\mathcal{P}\left(x_{1: t}\right)$ and $\hat{x}_{1: T}$. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The majority of limitations are addressed in the 'weaknesses' and 'questions' sections. From a broader societal perspective, given the theoretical nature of this work, there is no immediate negative impact, and I cannot foresee any. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for noting that our main result is intuitive and elegant. We address each of the weaknesses and questions below. **Weaknesses** - In this paper, we consider abstract instance spaces $X$. Accordingly, we did not discuss how the predictions can be generated since there is unlikely to be a one-size-fits-all Predictor. That said, for particular choices of the instance space $X$, existing literature in ML can certainly provide methods for generating predictions. For example, if $X$ is the state space for a discrete-time dynamical system, then existing algorithms for next-state prediction in discrete-time dynamical systems can be used to forecast future instances/states. We will make sure to include a concrete example of how predictions may be generated from ML models in the camera-ready version. - This paper is mainly focused on the theoretical/conceptual benefits of instance predictions. In particular, an important implication/insight of our results is that the difficulty in adversarial online classification is not due to the uncertainty of the labels, but rather, the uncertainty with respect to future instances. That said, we do agree with the reviewer that experiments on synthetic data showcasing the improved performance of the proposed algorithms are an interesting and important direction. We do note that our algorithms use black-box access to Offline learners and Predictors. Thus, the efficiency of our learning algorithms depends on the efficiency of existing Offline learners and Predictors. To the best of our knowledge, we do not know of any implementation of an Offline learner, let alone an efficient one. In particular, the Offline learner from Hanneke et al. [2023] is provably inefficient since it requires computing the VC dimension. Finally, we would like to point out that there are several influential papers in learning-augmented algorithms whose contributions were primarily theoretical [1, 2, 3, 4, 5]. 
**Questions** - Yes, there are public benchmarks where one can evaluate the performance of online classification algorithms (e.g., the UCI ML Repository). However, these public benchmarks are also used to evaluate batch learning algorithms. Accordingly, it is unclear how to design a Predictor for these datasets since they are not inherently online: it is not known how the instances were generated or whether there is any particular order to the rows in the datasets. - Yes, in the third sentence of line 107, it should be $\hat{y}_t \in Y$. - Yes, these two citations cite the same paper. We will fix this in the final version. - We will make sure to define the Predictor and $\hat{x}^t_{1:T}$ explicitly in the camera-ready version. We use $P(x_{1:t})$ to denote the prediction made by Predictor $P$ after observing $x_1, ..., x_t$, and write $\hat{x}^t_{1:T}$ for these predictions when it is more convenient. [1] Rohatgi, D. (2020). Near-optimal bounds for online caching with machine learned advice. In Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms (pp. 1834-1845). Society for Industrial and Applied Mathematics. [2] Dütting, Paul, et al. "Secretaries with advice." Proceedings of the 22nd ACM Conference on Economics and Computation. 2021. [3] Lattanzi, Silvio, et al. "Online scheduling via learned weights." Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics, 2020. [4] Antoniadis, Antonios, et al. "Secretary and online matching problems with machine learned advice." Advances in Neural Information Processing Systems 33 (2020): 7933-7944. [5] Angelopoulos, Spyros, et al. "Online computation with untrusted advice." arXiv preprint arXiv:1905.05655 (2019). --- Rebuttal Comment 1.1: Comment: Thanks for the authors' replies and clarification. I will maintain my positive rating.
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Improving robustness to corruptions with multiplicative weight perturbations
Accept (spotlight)
Summary: To achieve robustness against corruptions, most existing works rely on incorporating corruptions during training. In contrast, this paper takes a different perspective and proposes a weight-based approach called DAMP to achieve model robustness against corruptions without compromising accuracy on clean examples. This work also studies Sharpness-Aware Minimisation (SAM) methods and points out the relationship between the proposed DAMP and SAM-based methods. Strengths: The paper takes a new perspective, compared to existing methods, on achieving robustness against corruptions. The proposed method is simple, has nearly the same complexity as SGD, and achieves competitive results. Theoretical results are provided to justify the hypothesis. The presentation is clear and easy to follow. Weaknesses: There could be a related work [1] focusing on (adversarial) corruptions, which also approaches the model robustness problem from the model-weights perspective. I suggest the authors take a look at [1] and compare it with their work. The experimental results, relying only on algorithmic corruptions (ImageNet-C, CIFAR-C), could be limited. I suggest the authors report results on other corruptions (such as Stylised ImageNet, ImageNet-D), natural corruptions (such as ImageNet-A, ImageNet-Sketch), and adversarial corruptions. I believe this assessment would further strengthen the experimental results. The results for SAM/ASAM in Tables 1 and 2 are relatively comparable to those for DAMP when considering the effect of accuracy on clean images. As we know, improving accuracy on clean images generally leads to better accuracy on corrupted images. While I understand that this work takes a different approach from SAM/ASAM, I am curious to know what advantages DAMP offers over SAM/ASAM. 
As the theory is based on a simple feedforward neural network, I wonder whether DAMP can improve robustness on deeper/modern neural networks, such as ResNet-152, EfficientNet, ConvNeXt, or MaxViT. [1] Ha, Hyeonjeong, Minseon Kim, and Sung Ju Hwang. "Generalizable lightweight proxy for robust NAS against diverse perturbations." Advances in Neural Information Processing Systems 36 (2024). Technical Quality: 3 Clarity: 4 Questions for Authors: What is the effectiveness of DAMP on deeper/modern neural networks, such as ResNet-152, EfficientNet, ConvNeXt, MaxViT? What is the effectiveness of DAMP on other corruptions (such as Stylised ImageNet, ImageNet-D), natural corruptions (such as ImageNet-A, ImageNet-Sketch), and adversarial corruptions? What is the advantage of DAMP over SAM/ASAM? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. "There could be a related work [1] focusing on (adversarial) corruptions, where it also approaches the model robustness problem from model weights perspective...."** Thank you for the suggestion. This is indeed an interesting related paper which proposes a score called CRoZe that can quickly evaluate the robustness to input perturbations of an architecture using only one batch of training data for zero-shot neural architecture search (NAS). We think that the multiplicative weight perturbations (MWPs) of DAMP can also be used inside the calculation of the CRoZe score to evaluate the robustness of an architecture, which could be a good future research direction. We will include this mention of [1] in the Related Works section of our revised paper. **2. "What is the effectiveness of DAMP on deeper/modern neural networks, such as Resnet-152 EfficientNet, ConvNext, MaxViT?"** In our paper, we have demonstrated the effectiveness of DAMP on ResNet50 and ViT-S16 on ImageNet. We now further provide new results with ViT-B16 in our global response (Table D in the PDF). These results together demonstrate that DAMP is capable of improving the robustness of deep and modern neural networks. Furthermore, as modern networks get larger and have bigger learning capacity, we believe that they could benefit from the additional implicit data augmentations/perturbations induced by the random multiplicative weight perturbations of DAMP, even more so than their smaller counterparts since they have the capacity to learn to output the correct predictions even when the inputs are corrupted by these implicit perturbations. **3. "What is the effectiveness of DAMP on other corruption (such as Stylised ImageNet, ImageNet-D), natural corruption (such as ImageNet-A, ImageNet-Sketch), and adversarial corruption?"** Thank you for your suggestions. 
We provide new results in our global response (Table B and D in the PDF) where we evaluate models trained with DAMP and baseline methods on ImageNet-D [2], ImageNet-A [3], ImageNet-Sketch [4], ImageNet-Cartoon [5], ImageNet-Drawing [5], and adversarial corruptions generated by FGSM [6], demonstrating that models trained with DAMP are more robust to various types of input perturbations than the baseline training methods. **4. "What is the advantage of DAMP over SAM/ASAM?"** As stated in our paper, the main advantage of DAMP over SAM/ASAM is that it requires only one forward-backward pass per training iteration, while SAM/ASAM requires two consecutive forward-backward passes per iteration. Therefore, given the same number of iterations, SAM/ASAM takes twice as long to finish training than DAMP. Furthermore, our experimental results show that on corrupted inputs, DAMP outperforms SAM in the majority of cases and is competitive with ASAM. Finally, our new results in the global response (Table D in the attached PDF) indicate that training a ViT-B16 (86M params) with DAMP leads to better accuracy than training ViT-S16 (22M params) with SAM/ASAM, yet both settings take roughly the same amount of time. Thus given the same training budget, it is better to train a large model with DAMP than to train a smaller model with SAM/ASAM. [1] Ha et al. "Generalizable lightweight proxy for robust NAS against diverse perturbations." NeurIPS 2024. [2] Zhang et al. "ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object". CVPR 2024. [3] Hendrycks et al. "Natural Adversarial Examples". CVPR 2021. [4] Wang et al. "Learning Robust Global Representations by Penalizing Local Predictive Power". NeurIPS 2019. [5] Salvador et al. "ImageNet-Cartoon and ImageNet-Drawing: two domain shift datasets for ImageNet". ICML 2022 Shift Happens Workshop. [6] Goodfellow et al. "Explaining and Harnessing Adversarial Examples". arXiv preprint arXiv:1412.6572, 2014. 
--- Rebuttal Comment 1.1: Title: Official Comment by Reviewer dwU7 Comment: I thank the authors' effort in conducting new experiments, which have made the proposed method more convincing to me. However, the performance of DAMP does not appear to significantly surpass that of ASAM/SAM. I agree with the authors that DAMP generally offers a x2 speedup in iteration training time compared to ASAM/SAM. With the same training iterations (i.e., faster training time), ASAM/SAM still outperforms DAMP. DAMP only (marginally) surpasses ASAM/SAM when it is trained with more iterations, resulting in the same training time. Consequently, I appreciate the authors for their thorough rebuttal and have raised my score to 6. --- Rebuttal 2: Comment: Thank you for the comments and for the increased confidence in the proposed method. We note that there are cases where DAMP does not need more training iterations to surpass ASAM/SAM. For instance, in the ResNet50 / ImageNet experiments (Table 1 in the paper), we use 90 epochs for all training methods, yet DAMP is able to perform competitively with ASAM and surpass SAM. Furthermore, we show in Table D of our global response that training ViT-S16 / ImageNet using 500 epochs of DAMP (which took 111hrs) outperforms 300 epochs of ASAM/SAM (which took 123hrs). Therefore, even in the cases where DAMP requires more iterations to outperform ASAM/SAM, it still ends up requiring less overall runtime than ASAM/SAM.
Summary: This submission tackles the problem of generalization of image classifiers to corrupted images. The paper shows a link between data augmentation as well as previous methods such as Adaptive Sharpness-aware Minimization (ASAM) and multiplicative weight perturbations. Effectively, ASAM works as an adversarial weight multiplication which requires a second gradient pass per batch. The proposed DAMP method relaxes this constraint by using random weight perturbations. DAMP is benchmarked on CIFAR-10/100, TinyImageNet, and ImageNet under common corruptions (and follow-up works) with ResNet and ViT, where it often matches or even exceeds the original SAM at lower cost. Strengths: - Good writing. - A theoretical motivation for DAMP is provided (arguably with under-simplified relaxation). - A connection between ASAM and multiplicative weight perturbations is shown. - Good evaluation: DAMP is benchmarked on CIFAR-10/100, TinyImageNet, and ImageNet under common corruptions (and follow-up works) with ResNet and ViT. It is also compared to SAM and ASAM. All experiments (except ViT) use multiple runs! - DAMP outperforms SAM in most cases at a lower cost; it allows training of ViTs without excessive augmentation techniques. - DAMP is theoretically domain-agnostic and could be applied beyond image classification or vision. Weaknesses: My only concern is the contextualization of this work. It seems like DAMP is mostly an approximation of ASAM which allows faster training (and that is great) but at the same time, it is not as effective as ASAM. So effectively, this method introduces a trade-off between performance (accuracy) and training time. This raises the following detailed concerns: - If multiplicative weight perturbations are connected to data augmentation, then how does DAMP rank against them? Augmentations are mostly cheap, so they could achieve even better performance at the same overhead cost. 
One indication of this is in the ViT experiments on ImageNet: DAMP achieves a 23.7% clean error in 200 epochs without augmentations. This may sound impressive at first; however, the baseline that the authors rely on (Beyer, 2022) even exceeds this (23.5% error) in just 90 epochs. The only difference is that (Beyer, 2022) utilizes some simple augmentations. That is only clean accuracy, but it is usually indicative of performance under distribution shift [1]. Ideally, the authors would demonstrate that on a train time vs. mCE or clean acc curve, DAMP exceeds the current Pareto frontier (by benchmarking a few modern augmentation techniques in addition to SAM/ASAM). - Additionally, it would be great to understand if DAMP is compatible with augmentations. I don't want to use the clearly stated Limitations against the authors, but this is a very straightforward question that I had during reading. Other than that, the theory in Eq.1ff assumes bias-free MLP architectures without non-linearities, which obviously does not scale to modern architectures. (The authors mention this in their limitations and this is not something that needs to be addressed, but it is a weakness nonetheless.) Nitpicks: - L118: I agree that random perturbations introduce almost no overhead, but you shouldn't state "we found that ... had similar training times" in a scientific paper without backing it up with hard numbers. - It would be great to add averages over all severities in Tab. 1 and 2. Suggestion: - A simple way to kill all my concerns is to show performance on non-vision tasks (since DAMP is in theory data-agnostic). In that case, it would exceed the value of domain-specific augmentations. This would greatly enhance this paper but of course, I do not expect this from a rebuttal. [1] Taori et al., "Measuring Robustness to Natural Distribution Shifts in Image Classification", NeurIPS 2020. Technical Quality: 3 Clarity: 3 Questions for Authors: See above. 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors are very upfront about Limitations and I have no other suggestions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. "...It seems like DAMP is mostly an approximation of ASAM which allows faster training (and that is great) but at the same time, it is not as effective as ASAM. So effectively, this method introduces a trade-off between performance (accuracy) and training time..."** There is a misunderstanding here. DAMP is actually **not an approximation of ASAM** and can even outperform ASAM. The objective function of DAMP (Eq. 16 in our paper) is: $$ \mathcal{L}\_{\mathrm{DAMP}}(\boldsymbol{\omega}; \mathcal{D}) = \mathbb{E}\_{\boldsymbol{\xi}\sim\mathcal{N}(\boldsymbol{1},\sigma^2\mathbf{I})}\left[\mathcal{L}(\boldsymbol{\omega}\circ\boldsymbol{\xi}; \mathcal{D})\right] + \frac{\lambda}{2}||\boldsymbol{\omega}||\_2^2$$ while the objective function of ASAM (Eq. 21 in our paper) can be written as: $$\mathcal{L}\_{\mathrm{ASAM}}(\boldsymbol{\omega};\mathcal{D})=\max\_{||\boldsymbol{\xi}||\_2 \leq \rho}\mathcal{L}(\boldsymbol{\omega}\circ(1+\boldsymbol{\xi});\mathcal{D})+\frac{\lambda}{2}||\boldsymbol{\omega}||\_2^2$$ where $\boldsymbol{\omega}$ denotes the model's parameters, $\mathcal{D}$ is the training data, $\mathcal{L}$ is a loss function such as cross-entropy, and $\circ$ denotes the Hadamard product. Thus DAMP minimizes the **expected loss** under multiplicative weight perturbations (MWPs), while ASAM minimizes the loss under the **worst-case MWP**. This is a crucial distinction, since minimizing the expectation allows us to devise an efficient version of DAMP (Algorithm 1 in the paper), while ASAM needs two forward-backward passes for each iteration (the first to approximate the worst-case MWP, the second to minimize the loss under that MWP). Thus, given the same number of training iterations, ASAM takes **twice as long** to train as DAMP. 
In Table D of the PDF attached to our global response, training a ViT-B16 (86M params) with DAMP leads to better accuracy than training ViT-S16 (22M params) with ASAM, yet both settings take roughly the same amount of time. Thus, given the same training budget, it is better to train a large model with DAMP than to train a smaller model with ASAM. **2. "If multiplicative weight perturbations are connected to data augmentation then how does DAMP rank against them?..."** **"Additionally, it would be great to understand if DAMP is compatible with augmentations..."** Modern data augmentations (DAs) (such as MixUp, RandAugment) are cheap and contain informative prior knowledge, and thus they could greatly improve a model's performance. However, they are specific to computer vision and not applicable to other domains like natural language processing. On the other hand, multiplicative weight perturbations (MWPs) are less informative than DAs, but they are domain-agnostic and thus can be applied to a wide range of tasks. For instance, DAMP improves the test accuracy of Mistral-7B finetuned by LoRA on the MedMCQA dataset from 49.05% to 50.25%, as shown in Table A in the PDF attached to the global response. Furthermore, since MWPs operate on the weight space while DAs operate on the input space, they can be combined to further enhance performance, as shown in Table D of the global response, which combines DAMP with MixUp and RandAugment to train ViT-S16 and ViT-B16. **3. "the theory in Eq.1ff assumes bias-free MLP architectures without non-linearities which obviously does not scale to modern architectures."** There is a misunderstanding here. In our theoretical analysis, we use a bias-free MLP architecture with **non-linear activations**, as shown in Eq. 1-3 in our paper. It would be pointless for us to analyze an MLP without non-linear activations since such a model is equivalent to a simple one-layer linear model. We note that other papers (e.g. 
[1, 2]) also assume a bias-free MLP for their analyses. Finally, we want to note that Theorem 1 in our paper motivates the design of the DAMP algorithm which, as we demonstrate through our large-scale ImageNet experiments, does scale well to modern architectures. **4. "I agree that random perturbations introduce almost no overhead, but you shouldn't state "we found that ... had similar training times" in a scientific paper without backing it up with hard numbers."** **"It would be great to add averages over all severities in Tab. 1 and 2."** Thank you for your suggestions. We have now added the runtimes and the results averaged over severities for each setting in our global response (Tables B and D), and will include them in the revised paper. **5. "A simple way to kill all my concerns is to show performance on non-vision tasks (since DAMP is in theory data-agnostic). In that case, it would exceed the value of domain-specific augmentations. This would greatly enhance this paper but of course, I do not expect this from a rebuttal."** Thank you for your suggestions. We have now added results of using DAMP with LoRA to finetune a Mistral-7B on the MedMCQA dataset in our global response (Table A), demonstrating that it improves the test accuracy from 49.05% to 50.25%. [1] Dusenberry et al. "Efficient and scalable Bayesian neural nets with rank-1 factors." ICML, 2020. [2] Andriushchenko et al. "Sharpness-Aware Minimization Leads to Low-Rank Features". NeurIPS, 2023. --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. The newly added experiments address my concerns - I'd suggest including them in the paper. I am raising my score to 6. --- Rebuttal 2: Comment: Thank you for the comments. We will definitely include these new results in the revised version of the paper.
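For intuition, here is a minimal numpy sketch of a single DAMP-style update on a linear least-squares model (our illustrative reconstruction of the "one forward-backward pass" idea in the rebuttal above, not the authors' implementation; the model, learning rate, and noise scale are assumptions): sample a multiplicative perturbation $\xi \sim \mathcal{N}(1, \sigma^2 I)$, evaluate the gradient at the perturbed weights, and apply it together with weight decay.

```python
import numpy as np

# Illustrative sketch of one DAMP-style update on a linear model with
# squared loss (not the authors' code): sample a multiplicative weight
# perturbation xi ~ N(1, sigma^2 I), evaluate the gradient at w * xi,
# and apply it with weight decay -- a single forward-backward pass.

def damp_step(w, X, y, sigma=0.1, lr=0.01, weight_decay=1e-4, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    xi = rng.normal(1.0, sigma, size=w.shape)  # multiplicative perturbation
    w_pert = w * xi                            # Hadamard product w ∘ ξ
    grad = X.T @ (X @ w_pert - y) / len(y)     # gradient of 0.5*MSE at w ∘ ξ
    grad = grad * xi + weight_decay * w        # chain rule through w ∘ ξ, plus decay
    return w - lr * grad

# Running a few hundred such steps on a toy regression problem drives the
# training loss down, despite the noise injected into the weights.
rng = np.random.default_rng(42)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5])
w = np.zeros(3)
for _ in range(200):
    w = damp_step(w, X, y, lr=0.05, rng=rng)
```

The contrast with an ASAM-style update would be that, instead of a random `xi`, one first spends an extra forward-backward pass to find an (approximately) worst-case perturbation before taking the step, which is the 2x cost difference discussed in the rebuttal.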
Summary: The paper presents Data Augmentation via Multiplicative Perturbations (DAMP), a novel method to enhance DNN robustness against image corruptions by optimizing with random multiplicative weight perturbations. This approach improves generalization on corrupted images without compromising accuracy on clean ones. The authors show that input perturbations can be mimicked by multiplicative perturbations in the weight space. The authors demonstrate DAMP's effectiveness across multiple datasets and architectures, and explore its connection to Adaptive Sharpness-Aware Minimization (ASAM). Strengths: The paper presents a novel perspective on enhancing neural network robustness by leveraging multiplicative weight perturbations. The experiments are comprehensive and well-designed, covering multiple datasets and model architectures. The results consistently show that DAMP improves robustness against a wide range of corruptions without sacrificing performance on clean images. The method achieves these improvements without incurring additional computational costs. Weaknesses: It would be beneficial to see how the method performs with different hyperparameter values, as the reported numbers for different metrics are close to each other. Assumption 2 is not explained very well and could benefit from a clearer, more detailed explanation. While DAMP works well for small models and datasets, it would be interesting to see the results with larger models and datasets, as it seems to show some instability in results on these larger models and datasets. Technical Quality: 3 Clarity: 3 Questions for Authors: Why is DAMP trained for 200 epochs compared to other methods trained for 100 epochs in Table 2? How does the model perform when tested on other types of distribution shifts beyond those included in the corruption benchmarks? How do the methods perform without basic Inception-style preprocessing? 
Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The results could be better explained, particularly why the accuracy on clean images and lower severity corruptions is lower than the ASAM method in larger-scale experiments for ResNet/ImageNet. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. "It would be beneficial to see how the method performs with different hyperparameter values, as the reported numbers for different metrics are close to each other."** Thank you for your suggestion. We provide Table C in the PDF file attached to our global response showing the accuracy of ViT-S16 on ImageNet under different variances of the random multiplicative weight perturbations of DAMP. Specifically, as we increase the perturbation variance, the accuracy improves up to a maximum value then degrades afterwards. We will include this table in the revised paper. **2.  "Assumption 2 is not explained very well and could benefit from a clearer, more detailed explanation"** Assumption 2 states that the gradient with respect to the input $\mathbf{x}$ of the loss function $\ell(\boldsymbol{\omega}, \mathbf{x}, y)$ is Lipschitz continuous. This simply means that there exists a constant $C > 0$ such that for all $\mathbf{x}\_1, \mathbf{x}\_2$, we have: $$||\nabla\_{\mathbf{x}}\ell(\boldsymbol{\omega},\mathbf{x}\_1,y)-\nabla\_{\mathbf{x}}\ell(\boldsymbol{\omega},\mathbf{x}\_2,y)||\_2^2\leq C||\mathbf{x}\_1-\mathbf{x}\_2||\_2^2$$ We will clarify assumption 2 as shown here in the revised paper. **3. "While DAMP works well for small models and datasets, it would be interesting to see the results with larger models and datasets, as it seems to show some instability in results on these larger models and datasets."** Perhaps there is a misunderstanding here. We have shown that DAMP works well on both ResNet50 and ViT-S16 on ImageNet in our paper, and we experienced no instability when using DAMP to train these models. In our new experiments (see Table D in the PDF file of our global response), we use DAMP to train a ViT-B16, a larger version of ViT-S16, showing that DAMP also leads to improved performance in this case. Thus we believe that DAMP is perfectly capable of training large models on large datasets. **4. 
"Why is DAMP trained for 200 epochs compared to other methods trained for 100 epochs in Table 2?"** In Table 2, we compare models trained with DAMP and AdamW for 200 epochs to models trained with SAM and ASAM for 100 epochs. This is because SAM and ASAM require two forward-backward passes per training iteration, while DAMP and AdamW require only one forward-backward pass per iteration. Therefore, the time it takes to run 200 epochs of DAMP is roughly the same as 100 epochs of SAM/ASAM. We have already emphasized this in the paper but will add a further comment. We also have included the runtime of these experiments in Table B and Table D of the PDF attached to our global response. **5. "How does the model perform when tested on other types of distribution shifts beyond those included in the corruption benchmarks?"** We provide new results in Table B and Table D in the PDF of our global response where we evaluate models trained with DAMP and baseline methods on ImageNet-D [1], ImageNet-A [2], ImageNet-Sketch [3], ImageNet-Cartoon [4], ImageNet-Drawing [4], and adversarial corruptions generated by FGSM [5], demonstrating that models trained with DAMP are more robust to various types of input perturbations than the baseline training methods. **6. "How do the methods perform without basic Inception-style preprocessing?"** We provide new results in Table D in the PDF of our global response where we train ViT-S16 and ViT-B16 with MixUp and RandAugment, showing that DAMP is able to work in tandem with these modern data augmentations to further enhance model performance. **7. 
"The results could be better explained, particularly why the accuracy on clean images and lower severity corruptions is lower than the ASAM method in larger-scale experiments for ResNet/ImageNet."** As we explain in Section 3 of the paper, ASAM actually minimizes the training loss under adversarial multiplicative weight perturbations (MWPs), as shown by the inner maximization over the MWPs $\boldsymbol{\xi}$ in its objective function (Eq. 21 in the paper): $$\mathcal{L}\_{\mathrm{ASAM}}(\boldsymbol{\omega};\mathcal{D})=\max\_{||\boldsymbol{\xi}||\_2 \leq \rho}\mathcal{L}(\boldsymbol{\omega}\circ(1+\boldsymbol{\xi});\mathcal{D})+\frac{\lambda}{2}||\boldsymbol{\omega}||\_2^2$$ and thus it is also able to produce robust models. This is why Table 1 in the paper (ResNet50 / ImageNet) shows that ASAM is able to outperform DAMP on clean images and on corruption severity levels 1, 2 and 3 of ImageNet-C. However, Table 1 also shows that DAMP outperforms ASAM on severity levels  4 and 5 of ImageNet-C as well as on all 5 severity levels of ImageNet-$\bar{\mathrm{C}}$. Furthermore, as we have stated above, the training time of DAMP is half that of ASAM, and since we use 90 epochs for all experiments in Table 1, this means that DAMP is able to outperform ASAM on the majority of test sets while being more efficient. We will add a further pointer to these for clarity. [1] Zhang et al. "ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object". CVPR 2024. [2] Hendrycks et al. "Natural Adversarial Examples". CVPR 2021. [3] Wang et al. "Learning Robust Global Representations by Penalizing Local Predictive Power". NeurIPS 2019. [4] Salvador et al. "ImageNet-Cartoon and ImageNet-Drawing: two domain shift datasets for ImageNet". ICML 2022 Shift Happens Workshop. [5] Goodfellow et al. "Explaining and Harnessing Adversarial Examples". arXiv preprint arXiv:1412.6572, 2014. 
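The inner maximization over the MWPs $\boldsymbol{\xi}$ in the ASAM objective above is, in practice, approximated by a single linearized ascent step. As a rough illustration of this idea, here is a minimal numpy sketch on a toy least-squares loss. This is not the authors' code nor ASAM's actual implementation (which additionally uses a weight-normalization operator); it only illustrates the one-step approximation of the stated objective:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, X, y):
    # toy least-squares loss standing in for the training loss L
    r = X @ w - y
    return 0.5 * np.mean(r ** 2)

def grad(w, X, y):
    r = X @ w - y
    return X.T @ r / len(y)

def asam_style_step(w, X, y, rho=0.05, lam=1e-4, lr=0.1):
    # Linearize the inner max over multiplicative perturbations xi
    # (||xi||_2 <= rho): the ascent direction at xi = 0 is
    # dL(w*(1+xi))/dxi = w * grad_w L(w), so xi* = rho * g / ||g||.
    g = w * grad(w, X, y)
    xi = rho * g / (np.linalg.norm(g) + 1e-12)
    # Evaluate the gradient of L(w*(1+xi)) w.r.t. w (chain rule),
    # then descend, adding the L2 regularization term on w.
    g_pert = (1.0 + xi) * grad(w * (1.0 + xi), X, y)
    return w - lr * (g_pert + lam * w)
```

As the rebuttal notes, each step requires two forward-backward passes (one for `xi`, one for `g_pert`), which is why SAM/ASAM roughly double the training time.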
--- Rebuttal Comment 1.1: Comment: Thank you for the clarifications provided in your rebuttal. After reviewing the rebuttal and considering other comments, I will maintain my current rating. Including SAM/ASAM+ViT-B16 in Table D would further improve the paper. --- Reply to Comment 1.1.1: Comment: Thank you for your comments. We will definitely include results of SAM/ASAM+ViT-B16 in the revised version of the paper.
Summary: This paper works on improving robustness by multiplying a random Gaussian variable on weights during training. Strengths: The writing is easy to read and follow. Weaknesses: 1. The novelty is quite limited. This concept has been proposed and explored for at least a decade. For instance, variational dropout already suggested this approach, using a Bernoulli distribution. Another example can be found in [1], "Multiplicative or Additive Perturbation?", where they used a Gaussian distribution. 2. The theoretical foundation is weak. The theory section consists of trivial analysis by adding constrained corruption to inputs, which is not significantly related to this work. Reference: 1. Dusenberry, Michael, et al. "Efficient and scalable Bayesian neural nets with rank-1 factors." International Conference on Machine Learning. PMLR, 2020. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. What is the novelty of this work? 2. How does DAMP compare to modern augmentation techniques such as those used in DeiT and subsequent works? DAMP's augmentation does not appear to be simpler than these methods. Furthermore, DAMP is evidently more unstable, particularly in large-scale training, due to the stochasticity introduced during training, a common issue observed in many Bayesian Neural Network (BNN) related works. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: See the above sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. On the novelty of this work and its connections to previous works:** The novel contributions are: 1. Showing the theoretical connection between perturbations in the input space and perturbations in the weight space, in particular, that one can simulate perturbations in the input space via multiplicative weight perturbations. 2. From this connection, we propose a new method, Data Augmentation via Multiplicative Perturbations (DAMP), which multiplies each individual weight of a network with its own Gaussian random variable during training. 3. We provide a new theoretical interpretation of Adaptive Sharpness Aware Minimization (ASAM) [3], a variant of SAM [2], showing that ASAM actually minimizes the loss under adversarial *multiplicative* weight perturbations, as opposed to SAM, which minimizes the loss under adversarial *additive* perturbations. 4. We show that DAMP works well with modern architectures, as demonstrated by our large-scale experiments training ResNet50 and ViT-S16 on ImageNet. The differences between DAMP and previous works like Dropout and its variants are: 1. Dropout multiplies *all the weights connecting to the same input node* with a *random Bernoulli variable*, which has the disadvantage that it reduces the update frequency of the weights during training (since dropped weights receive no gradient update from backpropagation). On the other hand, DAMP multiplies *each individual weight* with *its own random Gaussian variable*, which perturbs the weights while allowing them to receive non-zero gradients at each iteration. 2. Dropout focuses on improving generalization on clean test data, while DAMP focuses on improving generalization on corrupted test data. 3. The work in [1] multiplies *all the weights connecting to the same input node* with a *Gaussian random variable*. 
Their motivation is to introduce Rank-1 BNNs, an efficient Bayesian neural network (BNN), and they attribute the improvements on corrupted data to the epistemic uncertainty of the Rank-1 BNNs. Our work demonstrates that *multiplicative weight perturbations* are actually the main reason for the improved robustness. While the concept of weight perturbations is not new, to the best of our knowledge, no earlier work specifically studies this technique to robustify models to input corruptions. Here we shed light on a simple algorithm that could be applied to any training setting to enhance model robustness with zero additional cost, and we believe that this new knowledge is beneficial to the machine learning community. **2. "The theoretical foundation is weak. The theory section consists of trivial analysis by adding constrained corruption to inputs, which is not significantly related to this work."** We appreciate your perspective on our theoretical analysis. However, we believe that our analysis, while simple, provides a direct link between input corruptions and multiplicative weight perturbations. Specifically, Theorem 1 proves that the training loss under input corruptions is upper bounded by the training loss under multiplicative weight perturbations plus an $L2$-regularization term. This connection serves as the foundation for our DAMP algorithm, offering a clear rationale for its design and effectiveness. We also note that many analyses, when presented in a clear and concise manner, may seem obvious in hindsight. **3. Comparing DAMP to modern augmentation techniques:** The modern augmentation techniques directly modify the inputs in the input space, while DAMP perturbs the weights in the weight space to simulate input corruptions. As they operate on two different spaces, DAMP and the augmentation techniques can be combined together to further enhance corruption robustness. 
We demonstrate this idea with new experiments training ViT-S16 and ViT-B16 on ImageNet with MixUp and RandAugment (Table D in the PDF of our global response), showing that DAMP works in tandem with these augmentations to boost robustness. DAMP is simpler than these augmentation techniques since it is domain-agnostic and can be applied to any task and any neural network architecture. This is actually an advantage as demonstrated through our new experiments finetuning Mistral-7B on the MedMCQA dataset using DAMP and LoRA (Table A in the PDF of our global response). By applying DAMP on the low-rank weights, we improve the test accuracy on MedMCQA from 49.05% to 50.25%. **4. "DAMP is evidently more unstable, particularly in large-scale training, due to the stochasticity introduced during training..."** We respectfully disagree. DAMP multiplies each individual weight with its own Gaussian random variable with distribution $\mathcal{N}(1, \sigma^2)$, and thus we can control the strength of the perturbations via the variance $\sigma^2$. Of course, if we set $\sigma^2$ too high then the training cannot converge due to high stochasticity, while setting $\sigma^2$ too low reduces the effectiveness of the perturbations. In this sense, our method is no different from other regularization techniques, as setting the dropout rate or the weight decay coefficient too high also prevents convergence. In our experiments, we never observe any instability when the variance is properly tuned using a validation set, and we were able to successfully train ResNet50, ViT-S16, and ViT-B16 on ImageNet, as well as to finetune Mistral-7B. Finally, we note that earlier studies such as [4] showed that introducing stochasticity in neural network training acts as regularization which improves generalization. [1] Dusenberry et al. "Efficient and scalable Bayesian neural nets with rank-1 factors." ICML, 2020. [2] Foret et al. 
“Sharpness-Aware Minimization for Efficiently Improving Generalization”. ICLR, 2021 [3] Kwon et al. “ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks”. ICML, 2022 [4] Keskar et al. “On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima”, ICLR 2017. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their response. Unfortunately, it does not address my concerns regarding this paper. Is the technique new? No. This is simply variational dropout (VD). While the authors cite the paper by Kingma et al. (2015) in the related work section, their comments are somewhat misleading. The VD series of works has already established that traditional dropout involves multiplying Bernoulli variables, and that these can also be Gaussian variables, which is exactly what this work does. However, the authors only cite Kingma’s paper but overlook this point, merely mentioning that dropout involves multiplying a Bernoulli variable. It's well-established that this is just a variant of dropout. Is using dropout to improve robustness new? Certainly not. A simple search for "dropout improve robustness" yields numerous papers and blog posts on this topic. Even for variational dropout, this approach isn't novel. Variational dropout is a standard technique in training Bayesian neural networks (BNNs), where testing robustness is a common metric, as BNNs are well-known for their enhanced robustness. Is a weight-based approach to improve robustness new? No. Dropout itself can be considered a weight-based approach to improving robustness, and it is standard practice to combine dropout with other techniques applied to inputs. Just using Mixup and Randaugment seems not to be a strong baseline. What about stronger baseline such as the commonly-used DeiT [1]? Actually they mentioned that dropout hurts performance in their settings. 1. 
Training data-efficient image transformers & distillation through attention --- Reply to Comment 1.1.1: Comment: Thank you for your comments. **1. On the differences between DAMP, Variational Dropout (VD) and Dropout** We need to emphasize that there is an important difference between DAMP, VD and Dropout. More details below: While it is true that DAMP is closely related to the family of Dropout methods (which includes VD) as we have stated in the Related works section of our paper, we also highlight one crucial difference: DAMP **multiplies each individual weight with its own Gaussian random variable (RV)** $\mathcal{N}(1, \sigma^2)$, while VD and Dropout **share the same multiplicative RV for all weights connecting to the same input node** (VD uses Gaussian RVs while Dropout uses Bernoulli RVs). This difference is very important because we have shown that DAMP **can train a ViT-S16 from scratch on ImageNet to match the performance of a ResNet50 without strong data augmentations**, while Dropout is unable to do so, as stated in Section 3 of [1] which we directly quote below: > Existing works report that ViTs yield inferior accuracy to the ConvNets of similar size and throughput when trained from scratch on ImageNet without the combination of those advanced data augmentations, despite using various regularization techniques (e.g., large weight decay, Dropout (Srivastava et al., 2014), etc.). This phenomenon suggests that the weight perturbations of DAMP are more expressive than VD and Dropout. The reason we did not mention VD further in our rebuttal is that we already mentioned [2], which is a more recent work that uses Gaussian multiplicative weight perturbations. We also emphasize that this work **shares the same multiplicative Gaussian RVs for all weights connecting to the same input node** just like VD and therefore is different from DAMP. 
Regarding Bayesian neural networks (BNNs), which you mention as the reason why VD improves robustness, they achieve robustness by ensembling multiple predictions from samples drawn from their weight posteriors via Bayesian model averaging [3]. Thus at test time, BNNs need to make multiple forward passes to make a prediction on each test input. Our method directly improves the robustness of deterministic neural networks, which only need one forward pass to make a prediction on each test input. [1] Chen et al. "When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations". ICLR 2022. [2] Dusenberry et al. "Efficient and scalable Bayesian neural nets with rank-1 factors." ICML, 2020. [3] Wilson et al. "Bayesian Deep Learning and a Probabilistic Perspective of Generalization." arXiv:2002.08791. 2022 **2. On the novelty of our paper** In addition to introducing a new training method (DAMP), the novelty of our work also includes providing a new perspective on Adaptive Sharpness Aware Minimization (ASAM) [4], showing that it optimizes neural networks under adversarial **multiplicative** weight perturbations, while its predecessor Sharpness Aware Minimization (SAM) [5] optimizes under adversarial **additive** weight perturbations. This explains why both DAMP and ASAM outperform SAM on corrupted test sets, as we showed in our paper. Furthermore, while we agree that the Dropout-like, weight-based perturbation approach to robustness is not new, most prior works we could find only consider Bernoulli random variables. This is also evidenced by the fact that the training recipes of large vision models such as ViT only consider using Dropout and not the alternatives. Here we shed light on DAMP, which is possibly a better alternative to Dropout, and we demonstrate that DAMP greatly improves the performance and robustness of ViT, regardless of whether strong data augmentation techniques are applied. [4] Kwon et al. 
“ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks”. ICML, 2022 [5] Foret et al. “Sharpness-Aware Minimization for Efficiently Improving Generalization”. ICLR, 2021 **3. "Just using Mixup and Randaugment seems not to be a strong baseline. What about stronger baseline such as the commonly-used DeiT ? Actually they mentioned that dropout hurts performance in their settings."** We use MixUp and RandAugment following the training recipe from [6] and we believe it is not a weak baseline since it contains half the augmentation techniques used by DeiT (DeiT uses MixUp, RandAugment, CutMix and random erase). In fact, the left panel in Figure 4 of [6] shows that with only MixUp and RandAugment, ViT-B16 can reach up to 83% test accuracy on ImageNet, which is similar to the 83.1% test accuracy achieved by DeiT-B (Table 7 of [7]). Due to the differences between DAMP and Dropout stated above, the fact that Dropout hurts performance of DeiT does not imply that DAMP would as well. [6] Steiner et al. "How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers". TMLR 2022. [7] Touvron et al. "Training data-efficient image transformers & distillation through attention". ICML 2021. --- Rebuttal 2: Comment: I appreciate the authors for their prompt response. It appears that DMAP is more similar to variational DropConnect [1], as it involves multiplying random variables with weights rather than activations. This approach also seems akin to variational inference in Bayesian Neural Networks (BNNs), where independent Gaussian variables are multiplied or added to weights in each forward process in practice, as seen in recent methods like SSVI [2]. I would be inclined to support acceptance if the authors could clarify these 2 points further in future revisions. This work seems to provide valuable guidance and aims to scale these techniques to larger models and datasets, with a specific benefit in robustness. 1. 
Rimella, Lorenzo, and Nick Whiteley. "Dynamic Bayesian Neural Networks." arXiv preprint arXiv:2004.06963 (2020). 2. Li, Junbo, et al. "Training Bayesian Neural Networks with Sparse Subspace Variational Inference." arXiv preprint arXiv:2402.11025 (2024). --- Rebuttal Comment 2.1: Comment: Thank you for your response. **1. On the connection between DAMP and variational inference methods such as variational DropConnect [1] and SSVI [2]** DAMP indeed could be interpreted as performing variational inference. This can be seen from the objective function of DAMP (Eq. 16 in our paper), which we restate below: $$\mathcal{L}\_{DAMP}(\boldsymbol{\omega};\mathcal{S})=\underbrace{\mathbb{E}\_{\boldsymbol{\xi}\sim p(\boldsymbol{\xi})}\left[\mathcal{L}(\boldsymbol{\omega}\circ\boldsymbol{\xi};\mathcal{S})\right]}\_{\text{expected loss}} + \underbrace{\frac{\lambda}{2}||\boldsymbol{\omega}||\_F^2}\_{\text{$L\_2$-regularization}}$$ where: * $\boldsymbol{\omega}$ is the model's weights. * $\boldsymbol{\xi}$ is the vector of the multiplicative random variables with the same dimension as $\boldsymbol{\omega}$ whose distribution is $p(\boldsymbol{\xi})=\mathcal{N}(\boldsymbol{1},\sigma^2\mathbf{I})$. * $\circ$ denotes the Hadamard product. * $\mathcal{S}$ is the training data. * $||\cdot||\_F^2$ denotes the Frobenius norm. * $\mathcal{L}$ is **the original loss function which is the negative log-likelihood loss** for classification tasks. Therefore, the **expected loss** in the above equation can be interpreted as the expected log-likelihood term of the evidence lower-bound (ELBO) (the loss function used in variational inference) where the variational distribution is the distribution of the multiplicative random variables $p(\boldsymbol{\xi})$. 
In fact, since we don't optimize the variational distribution $p(\boldsymbol{\xi})$ and keep it fixed at $\mathcal{N}(\boldsymbol{1},\sigma^2\mathbf{I})$ throughout training, **minimizing the loss function $\mathcal{L}\_{DAMP}$ is equivalent to minimizing the negative ELBO plus an $L\_2$-regularization term** (since the KL divergence typically presented in the ELBO has no effect on training when we don't optimize the variational distribution.) From this perspective, DAMP is indeed related to [1] and [2] as you have suggested. Furthermore, our work presents directions to scale these variational approaches to large models and datasets, as you have concluded above. We will include the main points of this discussion in the revised version of our paper. We hope that this response is helpful to further understand our method. [1] Rimella, Lorenzo, and Nick Whiteley. "Dynamic Bayesian Neural Networks." arXiv preprint arXiv:2004.06963 (2020). [2] Li, Junbo, et al. "Training Bayesian Neural Networks with Sparse Subspace Variational Inference." arXiv preprint arXiv:2402.11025 (2024).
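To make the objective above concrete, the expected-loss term of $\mathcal{L}_{DAMP}$ can be estimated with a single Monte Carlo sample per iteration. Below is a minimal numpy sketch of one such DAMP-style update on a toy least-squares loss; this is an illustration under our own simplifying assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, X, y):
    # toy least-squares loss standing in for the training loss L
    # (the paper uses the negative log-likelihood for classification)
    r = X @ w - y
    return 0.5 * np.mean(r ** 2)

def grad(w, X, y):
    r = X @ w - y
    return X.T @ r / len(y)

def damp_step(w, X, y, sigma=0.1, lam=1e-4, lr=0.1):
    # One-sample Monte Carlo estimate of E_xi[L(w o xi)]:
    # each individual weight gets its own multiplicative N(1, sigma^2) sample.
    xi = rng.normal(loc=1.0, scale=sigma, size=w.shape)
    # Chain rule: d/dw L(w o xi) = xi o grad L, evaluated at w o xi.
    g = xi * grad(w * xi, X, y)
    # Descend the perturbed loss plus the L2 term (lambda/2)||w||^2.
    return w - lr * (g + lam * w)
```

Note that, unlike SAM/ASAM, this update needs only one forward-backward pass per iteration, which is why DAMP matches the training time of standard optimizers.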
Rebuttal 1: Rebuttal: We want to thank the reviewers for their time and for providing us with thoughtful comments which help us improve our work. In this global response, we first provide a brief summary of our paper. We then present additional experiment results, which are included in the attached PDF file. ## Paper summary: The aim of our paper is to introduce a novel, efficient training method that enhances neural network (NN) robustness against input perturbations without incurring additional costs. We begin by presenting theoretical analyses demonstrating that input space corruptions can be simulated using multiplicative weight perturbations (MWPs) in the weight space. We argue that training under these simulated corruptions would allow models to become more robust. From these insights, we propose Data Augmentation via Multiplicative Perturbations (DAMP), which optimizes NNs under random Gaussian MWPs. At each training iteration, DAMP multiplies **each individual weight with its own random variable $\mathcal{N}(1, \sigma^2)$**. Additionally, we offer a new perspective on ASAM [1], revealing that it also produces robust models by optimizing under adversarial MWPs. Notably, while DAMP maintains the same training time as standard optimizers like Adam, ASAM *doubles* this duration. Our experimental results, conducted on both small datasets (CIFAR, TinyImageNet) and large-scale scenarios (ResNet50 and ViT-S16 on ImageNet), demonstrate that DAMP outperforms baselines in robustifying neural networks against a wide range of corruptions. ## New results: The additional results in the PDF file demonstrate the following properties of DAMP: * Domain-agnostic: DAMP can be used in non-computer-vision domains as it can be used to finetune a large language model. * Compatible with modern augmentations: DAMP can be combined with modern data augmentations to further enhance robustness. 
* Capable of training large models: DAMP has no problem training ViT-B16 on ImageNet and achieves better results than AdamW. * Same training costs as standard optimizers: We back up the claim that DAMP and standard optimizers like AdamW have the same training time by providing concrete numbers. * Capable of improving robustness to various corruptions: We evaluate DAMP on additional types of corruptions, including adversarial corruptions, to verify that it is indeed effective on a wide range of corruptions. We believe that these results will resolve all of the concerns that you had. Below, we outline the contents of the PDF file: - Table A includes the results of using DAMP with LoRA [2] to finetune Mistral-7B [3] on the MedMCQA dataset [4]. These results show that DAMP consistently improves the test accuracy of the finetuned models, which answers the concern of Reviewer **Vu1M** about the benefit of DAMP on non-computer-vision tasks, as well as the concerns of Reviewers **XEeL** and **Vu1M** about the advantages of DAMP over domain-specific augmentations like MixUp [5] and RandAugment [6]. - Table B extends the results of ViT-S16 / ImageNet with basic Inception-style augmentations in our paper with additional evaluation results on ImageNet-D [7], ImageNet-A [8], ImageNet-Sketch [9], ImageNet-Cartoon [10], ImageNet-Drawing [10], and adversarial corruptions generated by FGSM [11]. These results show that DAMP produces the most robust model on average, which addresses the concerns of Reviewers **dM3x** and **dwU7** about the robustness of DAMP to other types of input perturbations. - Table B also includes the results averaged over all severity levels of ImageNet-C and ImageNet-$\bar{\mathrm{C}}$ and the runtime of each experiment, as requested by Reviewer **Vu1M**. These runtimes show that ASAM and SAM indeed take roughly twice as long to train as DAMP and AdamW, and that DAMP and AdamW have the same runtime. 
- Table C shows the behavior of DAMP in the ViT-S16 / ImageNet experiments with basic Inception-style augmentations under different standard deviations $\sigma$ of the Gaussian random multiplicative weight perturbations (MWPs), as requested by Reviewer **dM3x**. These results show that as $\sigma$ increases, the accuracy of DAMP improves up to its maximum value and then degrades afterwards. - Table D presents results of DAMP and baselines when training ViT-S16 and ViT-B16 on ImageNet with MixUp [5] and RandAugment [6], demonstrating that: (i) DAMP can be combined with modern data augmentations (DAs) to further enhance robustness; (ii) DAMP is capable of training large models like ViT-B16; (iii) given the same amount of training time, it is better to train a large model (ViT-B16) using DAMP than to train a smaller model (ViT-S16) using SAM/ASAM. This addresses the concerns of Reviewers **dwU7** and **Vu1M** regarding the advantages of DAMP over SAM/ASAM, and the concern of Reviewer **Vu1M** about the compatibility of DAMP with modern DA techniques. [1] Kwon et al. "ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks". ICML 2022. [2] Hu et al. "LoRA: Low-Rank Adaptation of Large Language Models". ICLR 2022. [3] Jiang et al. "Mistral-7B". arXiv preprint arXiv:2310.06825, 2023. [4] Pal et al. "MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering". CHIL 2022. [5] Zhang et al. "mixup: Beyond Empirical Risk Minimization". ICLR 2018. [6] Cubuk et al. "RandAugment: Practical automated data augmentation with a reduced search space". NeurIPS 2020. [7] Zhang et al. "ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object". CVPR 2024. [8] Hendrycks et al. "Natural Adversarial Examples". CVPR 2021. [9] Wang et al. "Learning Robust Global Representations by Penalizing Local Predictive Power". NeurIPS 2019. [10] Salvador et al. 
"ImageNet-Cartoon and ImageNet-Drawing: two domain shift datasets for ImageNet". ICML 2022 Shift Happens Workshop. [11] Goodfellow et al. "Explaining and Harnessing Adversarial Examples". arXiv preprint arXiv:1412.6572, 2014. Pdf: /pdf/dc826fbb4d1231ef48f1f6b219597cc4704264c1.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Scaling Sign Language Translation
Accept (poster)
Summary: This paper proposes to improve open-domain sign language translation by scaling pretraining data, model size, and the number of translation directions. The proposed approach involves pretraining with a mixture of noisy multilingual YouTube SLT data, parallel corpora, and SLT data augmented with MT models. Experiments based on (m/By)T5 models show substantial improvements over the baselines on several benchmarks, surpassing the previous state-of-the-art (SOTA) by wide margins. Strengths: The paper is well-written and provides many insights. The problem it addresses, improving sign language translation performance in open-domain settings, is important. Scaling model size, number of languages, and data size leads to impressive performance based on (m/By)T5 models. Weaknesses: 1. In Figure 7, the legend is blocked. 2. There is no open-source code or model, and reproducing this work requires substantial resource expenditure. Technical Quality: 4 Clarity: 4 Questions for Authors: In the zero-shot ASL-to-X translation, does the model encounter similar issues seen in zero-shot text translation, such as translations in wrong languages? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors do not release the code or model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful comments! **Re: In Figure 7, the legend is blocked.** We will fix it in the next version. **Re: There is no open-source code or model** While we can’t release the code and model due to policy restrictions, we believe our findings and scaling results could shed light on the development of SLT research and inspire more follow-up studies. **Re: does the model encounter similar issues seen in zero-shot text translation, such as translations in wrong languages?** This is a great question! We checked the translations in zero-shot directions and have some interesting results. We evaluated the language accuracy and empty rate for Figure 2 (b) as below.

| Language | Baseline | w/ MT X<->En |
|---|---|---|
| es | 90.9 | 95.2 |
| de | 85.6 | 93.2 |
| fr | 92.4 | 94.1 |
| it | 82.4 | 91.2 |
| cs | 1.7 | 61.2 |
| pl | 78.5 | 97.7 |
| ru | 95.2 | 85.3 |
| zh | 45.0 | 15.0 |
| ar | 72.2 | 75.4 |
| ja | 76.5 | 73.7 |
| hi | 42.2 | 4.8 |

**Language Accuracy:** the accuracy of translations in the correct target language. Higher indicates better.

| Language | Baseline | w/ MT X<->En |
|---|---|---|
| es | 8.2 | 2.0 |
| de | 4.0 | 0.0 |
| fr | 6.5 | 0.6 |
| it | 5.1 | 0.6 |
| cs | 7.6 | 0.6 |
| pl | 2.3 | 0.0 |
| ru | 0.0 | 7.4 |
| zh | 2.8 | 17.3 |
| ar | 5.1 | 18.7 |
| ja | 0.3 | 0.3 |
| hi | 1.4 | 43.3 |

**Empty Rate:** the proportion of empty translations output by the model.

We noticed that zero-shot SLT also suffers from off-target translation, particularly for languages distant from English: for example, the Baseline only has a language accuracy of 1.7, 45.0 and 42.2 for Cs, Zh and Hi, respectively. Adding parallel translation data generally improves the translation language accuracy, e.g., from 1.7/78.5 to 61.2/97.7 for Cs/Pl. But there are also exceptions, like Zh and Hi, where the accuracy drops from 45.0 and 42.2 to 15.0 and 4.8.
A deeper inspection reveals that jointly training with translation data leads to more empty outputs for these languages: the empty rate increases from 2.8/1.4 to 17.3/43.3 for Zh/Hi. We argue that this may be because 1) these languages have significantly less parallel MT data, e.g. Hi only has 1.2M examples, and 2) the parallel corpus from MADLAD-400 can also be quite noisy. We will add these results in our revised version. --- Rebuttal Comment 1.1: Comment: Thank you for further addressing my concerns. The new results in the zero-shot direction are interesting. I will keep my positive score.
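The two metrics defined in the rebuttal above (language accuracy and empty rate) are straightforward to compute; below is a minimal sketch, assuming a hypothetical language-ID helper `detect_lang` (e.g. an off-the-shelf langid model, not part of the paper):

```python
def language_metrics(translations, target_lang, detect_lang):
    # Sketch of the two metrics defined in the rebuttal above.
    # `detect_lang` is an assumed language-ID helper, not from the paper.
    n = len(translations)
    # Empty rate: % of empty (whitespace-only) outputs.
    empty = sum(1 for t in translations if not t.strip())
    # Language accuracy: % of non-empty outputs in the correct target language.
    correct = sum(
        1 for t in translations if t.strip() and detect_lang(t) == target_lang
    )
    return 100.0 * correct / n, 100.0 * empty / n
```

Whether empty outputs count against language accuracy is not specified in the rebuttal; this sketch includes them in the denominator.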
Summary: This paper presents an approach to Sign Language Translation (SLT) that aims to scale the field by addressing limitations in data, model size, and the number of translation directions. The authors' key contributions are: * Data Scaling: The authors leverage diverse and large-scale datasets, including noisy multilingual YouTube SLT data and augmented SLT data generated from video captions, enabling more robust pretraining. * Model Scaling: They pretrain SLT models at various sizes, initializing them with pretrained (m/By)T5 models, demonstrating the impact of larger models on performance. * Cross-lingual and Cross-modal Transfer: The authors show that cross-lingual and cross-modal transfer from pretraining on multilingual data improves SLT performance, even enabling zero-shot translation, i.e., translating sign language to a spoken language without explicit training in that direction. * Open-domain SLT Evaluation: The models are finetuned on five downstream open-domain SLT benchmarks covering five different sign languages, leading to substantial quality improvements over existing state-of-the-art methods. Strengths: * The paper and approach are well motivated. * The experiments are generally well thought out. * The writing quality and analyses are generally good and informative. e.g., Sec 4.2 brought up questions that I had been wondering about and did a nice job answering them. * The results relative to SOTA are compelling. Weaknesses: There are multiple references that claim to be on arXiv but don't seem to exist, including the FLEURS-ASL paper. I found at least three instances: * Garrett Tanzer. Fleurs-asl: Including american sign language in massively multilingual multi-task evaluation. arXiv, 2024. * Garrett Tanzer. Fingerspelling within sign language translation. arXiv, 2024. * Garrett Tanzer and Biao Zhang. Youtube-sl-25: A large-scale, open-domain multilingual sign language parallel corpus. arXiv, 2024.
FLEURS-ASL#0 only has 353 sentences but is used for a large percentage of the experiments. I have some questions about whether these results are likely to generalize, especially given that these sentences were generated by a single person. Given that the FLEURS-ASL paper isn't actually on arXiv (as of July 13), I don't have a way to understand what types of sentences these include. It's important to understand the biases and breadth of what is covered. Lack of Comparative Analysis: While the paper surpasses the previous state-of-the-art, a more in-depth comparative analysis of other recent SLT methods, including their strengths and weaknesses, would strengthen the paper's argumentation. Yes, Table 4 does show SOTA numbers, but it would be useful to contrast the approaches. Minor: Phoenix is referred to in the Table 4 and 8 descriptions but only acronyms are used in the tables themselves -- it's unclear which acronym is which. Technical Quality: 3 Clarity: 3 Questions for Authors: See questions above. Especially, can you explain the missing references and talk through the characterization of the FLEURS-ASL dataset? Can you also confirm that you are not fine-tuning on FLEURS-ASL#0? Regarding Table 5 and related analysis, is there a reason why BLEURT could have a much higher correlation compared to BLEU or ChrF? How accurate are the mediapipe landmarks on messy YouTube data? Have you looked at correlations between landmark accuracy and translation quality? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Seems reasonable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful comments! **Re: the missing references** Thanks for pointing this out! Please note that our work is built on prior work discussed through personal correspondence with the authors before publication (hence our use of FLEURS-ASL#0 only, which was available to us before the full dataset was complete). YouTube-SL-25 has since been published to arXiv (https://www.arxiv.org/abs/2407.11144). According to our communication with the authors, the other papers should be released publicly very soon, including the FLEURS-ASL dataset. **Re: characterization of the FLEURS-ASL dataset? Can you also confirm that you are not fine-tuning on FLEURS-ASL#0?** FLEURS-ASL#0 is a translation of 353 sentences from the FLORES/FLEURS devtest set (https://github.com/facebookresearch/flores/tree/main/flores200) from English text to ASL by a Certified Deaf Interpreter, with at least 5-6 hours of preparation for each 1 hour of recorded content. Note the translations are performed in groups of sentences, not in isolation, since FLORES is constructed from documents. In this way, FLEURS-ASL serves as the first massively multilingual SLT evaluation benchmark, including American Sign Language and 200 spoken target languages; we evaluated the translation from ASL to 42 spoken languages in this study. Human performance on a random subset of this data (measured with a native Deaf ASL signer with credentials in sign language education) is about 14.8 BLEU / 63.4 BLEURT. Regarding model finetuning, we never finetuned our models on FLEURS-ASL#0: all the reported results are for pretrained models only.
**Re: a more in-depth comparative analysis of other recent SLT methods, including their strengths and weaknesses, would strengthen the paper's argumentation.** We’d like to highlight that the main focus of this paper is to understand and explore how scaling pretraining data, model size, and the number of sign languages improves SLT, which to the best of our knowledge has rarely been studied in the literature. While many other intriguing SLT methods have appeared recently, most of them focus on advanced modeling and/or the application of large language models, which is beyond the scope of our study. Also note that we did not claim that our method is better than the others. **Re: Phoenix is referred to in Table 4 and 8 descriptions** This is a typo and we will fix it in our next version. **Re: is there a reason why BLEURT could have a much higher correlation compared to BLEU or ChrF?** We hypothesize that BLEURT measures semantic similarity, which captures something beyond the simple exact n-gram matching of BLEU/ChrF. For example, the model often starts learning SLT by outputting some key words/phrases correctly. Such weak signals can hardly be captured by BLEU/ChrF. **Re: How accurate are the mediapipe landmarks on messy YouTube data?** This is a great question! Our simple evaluation by eyeballing some landmark examples showed acceptable results. But we believe the landmarks generated from mediapipe include noise to varying degrees for YouTube data, particularly when there are multiple signers in the video. Still, our scaling method could capture useful signals from these data. We leave the study of how landmark accuracy affects translation quality to future work.
Summary: This paper attempts to advance the development of sign language translation (SLT) technology by using large-scale pre-training data, expanding model size, and adding translation directions. Through extensive experiments, the authors have drawn many useful conclusions. Experiments show that this work achieves the best results on multiple open-domain SLT benchmarks covering multiple sign languages, although the translation quality needs to be further improved to meet practical needs. Strengths: - The exploration of sign language translation in the large-scale open domain is very valuable. Previously, much sign language translation work was limited to smaller domains. - The authors conducted a large number of experiments and obtained useful conclusions that can guide future work on sign language translation. - Combining all the gain-enhancing methods together, the authors achieved state-of-the-art results on multiple downstream sign language translation tasks. Weaknesses: - Unfortunately, the sign language data and code used in this article will not be open-sourced. I also can't seem to find the corresponding paper, "Youtube-sl-25: A large-scale, open-domain multilingual sign language parallel corpus", for the YT-Full dataset, which makes it difficult for others to reproduce this work. - As far as I know, using the original RGB sequence as input often achieves better results than simply using the pose sequence as sign language input because it contains more information. Can the author explain why the RGB sequence is not used as input? - The author does not seem to state the specific computing resources and time spent on the experiment in section 3. Technical Quality: 2 Clarity: 3 Questions for Authors: see Weaknesses Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful comments! **Re: the sign language data and code used in this article will not be open source** Please note that YouTube-SL-25 has been released on arXiv (https://www.arxiv.org/abs/2407.11144), including release of the clean subset of the data. We built on their work through personal correspondence with the authors before it was widely published. We are unable to release the source code because it uses an internal framework that has not been open sourced. **Re: Can the author explain why the RGB sequence is not used as input?** The RGB sequence was not used as input due to privacy considerations, as explained in Appendix A. Directly modeling RGB sequences from video sharing sites may raise ethical issues, as highlighted in the work "Towards Privacy-Aware Sign Language Translation at Scale" by Rust et al., and we believe the comparison between pose and RGB sequences is out of scope for our work. It would be an interesting question for future work to evaluate—in light of larger sign language datasets—how much of the gap between pose and RGB inputs is actually due to extra information in the RGB input vs. availability of pretrained vision encoders. **Re: The author does not seem to state the specific computing resources and time spent on the experiment in section 3.** As stated in line 160, Section 3, we moved the details on computing resources and time to Appendix B.1. Specifically, we pretrained models for up to 1M steps using 64/64/128 TPU-v3 chips for Base/Large/XL, which takes about 7~20 days depending on the model scale and resource availability.
Summary: The paper focuses on scaling sign language translation (SLT) by leveraging large-scale pretraining data, increasing model size, and expanding the number of translation directions. The study demonstrates the effectiveness of data/model scaling and cross-lingual cross-modal transfer in improving SLT performance across multiple benchmarks. The authors showcase substantial quality improvements in SLT through scaling, surpassing previous state-of-the-art results by wide margins. Strengths: 1. The paper pushes the frontier of SLT through large-scale pretraining and the exploration of various data sources. This approach demonstrates a novel application of scaling techniques in the SLT domain. 2. The research employs a rigorous methodology, including the use of different pretraining tasks and modalities. The thoroughness of the experimental design and validation is evident. 3. The presentation is clear and well-structured, allowing for an easy understanding of the research approach and findings. The logical flow of the paper aids in comprehending complex concepts. 4. The work has significant potential to advance open-domain SLT for multiple sign languages through scalable methods. The substantial quality improvements over previous state-of-the-art results highlight the impact of the research. Weaknesses: 1. Lack of discussion on the impact of model architectures and training strategies. The study lacks an in-depth exploration of how different model architectures or training strategies could affect SLT performance, potentially limiting the understanding of scalability and generalization. 2. Limited exploration of the robustness to variations in data quality and model complexity. The research does not extensively discuss the robustness of the proposed methods to changes in data quality and model complexity, which could impact the robustness of the study results.
Technical Quality: 4 Clarity: 3 Questions for Authors: 1. Have the authors considered the impact of different model architectures or training strategies on SLT performance, and how these factors could influence scalability and generalization? Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: This paper adequately discusses the limitations in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful comments! **Re: how different model architectures or training strategies could affect SLT performance; how these factors could influence scalability and generalization?** Thanks for this question! Firstly, it would be great if you could provide more context about the "model architectures and training strategies" you're interested in. According to our experiments in Table 1, Section 4.1, model architectures and training strategies have a non-negligible influence on SLT performance. We provided ablations for different T5 model families, covering T5, mT5 and ByT5, and model sizes at different scales, from Base and Large to XL. While these models follow the same encoder-decoder Transformer-based architecture, they differ significantly in the pretraining corpus (c4 vs. mc4), model parameter allocation, and the vocabularies. Two intriguing observations from Table 1: 1) ByT5 performs generally better than T5 and mT5 across scales, even though ByT5 and mT5 used the same pretraining corpus. 2) Model scaling doesn’t necessarily lead to improved performance; in particular, we observed consistently worse performance for Large compared to Base. We acknowledge the existence of different modeling variations, such as decoder-only Transformers (LLaMa, Gemma) and non-Transformer models (Mamba). We believe that different architectures and training strategies endow the model with different inductive biases that substantially affect its generalization and adaptability to cross-modality tasks like SLT. But exhaustively exploring how these variations affect SLT performance is beyond the scope of this study. **Re: the robustness of the proposed methods to changes in data quality and model complexity** Thanks for this question! The results in Table 4 and the YouTube-ASL performance [1] could demonstrate the robustness of our method.
Please note that YT-ASL is a noisy superset of YouTube-ASL, which includes significantly more but lower-quality sign language data (~2800 hours vs. ~1000 hours). Based on Table 4, using YT-ASL (ID: 2) achieves a BLEURT score of 51.74 on How2Sign, substantially outperforming the YouTube-ASL result, 46.63. Adding noisy multilingual sign language data (ID: 6) further increases the performance to 53.51. We will add a discussion on the robustness of our methods in the revised version following your suggestion! [1] Uthus et al., 2023. YouTube-ASL: A Large-Scale, Open-Domain American Sign Language-English Parallel Corpus --- Rebuttal Comment 1.1: Comment: The authors' response addressed my question, so I keep my positive rating.
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Building a stable classifier with the inflated argmax
Accept (poster)
Summary: This paper studies and proposes a new theoretical framework for algorithmic stability in multiclass classification, with a focus on stable class selection given predicted class probabilities. Based on the proposed theory, a selection criterion based on bagging, called the "inflated argmax", was proposed. Theoretical analysis suggests that the proposed inflated argmax is stable, while also giving a relatively tight candidate prediction set. Simple experiments using Fashion-MNIST were also provided to show the stability of the inflated argmax as well as how tight the candidate set can be. Strengths: 1. The paper provides a strong theoretical framework for future study of stable selection criteria for the multiclass classification problem given predicted class probabilities. 2. The proposed notion of $\epsilon$-compatibility is simple to understand and a reasonable extension of the classical algorithmic stability noted in Definition 2. I feel this notion is neat and also very useful for the further analysis in Proposition 4 and beyond. 3. A simple rule that satisfies stability but can give looser quality than the inflated argmax was also provided to give a better understanding of the problem. The study was quite detailed in Appendix D, which I find quite amazing. 4. Despite the complicated notion of the inflated argmax, the efficient computation of the selection rule once bagging is done is nicely provided in Proposition 10. Weaknesses: 1. Experiments were quite weak in my opinion to highlight the effectiveness of the inflated argmax. For example, a more reasonable baseline that has larger than 1 $\beta_{\mathrm{size}}$ could have been provided to show that the inflated argmax is quite favorable if it is allowed to predict more than one class. Current baselines are restrictive in the sense that only one class can be chosen (argmax). Several improvements could also be done (please see the questions section for possible improvements). 2.
The proposed inflated argmax requires bagging, which can be quite computationally expensive, or prohibitive in some scenarios with limited resources or large datasets/models. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can we also see the performance of the inflated argmax *without bagging* in the experiment? In my understanding, this should also be possible with the proposed efficient algorithm. This can be useful to see how much the bagging method is needed for the proposed algorithm. 2. Visualizing the number of bags needed to perform reasonably in the experiment can also be useful, because it can be expensive if we use a more sophisticated algorithm or a larger dataset. 3. In deep learning, the overconfidence problem can often be an issue when one wants to use probability scores. How does this problem affect the proposed inflated argmax in practice? 4. Can we know that the second class to be chosen as a candidate class is likely to be the second-largest probability class? 5. Is it reasonable to say that precision is similar given the different average precision and standard error provided (Lines 293--294)? 6. I was wondering if there are other possible baselines in related work that could be used in the experiments? 7. Suggestions on simple baselines: 7.1 Top-2 class prediction; the size of beta will be 2. 7.2 Predicting classes until the sum of probability reaches a certain constant, e.g., if we set threshold = 0.8 and the prediction probability vector is [0.1, 0.1, 0.1, 0.2, 0.6], then we will pick the last two classes (0.2 and 0.6). This simple idea looks intuitive and gives a flexible set, and this is the most simple baseline I had in mind that could be effective when I first read this paper. What do you think about this baseline? I think the core contribution of this paper is the theory of stable classification rules. Therefore, although the experiments look quite weak, my current rating is on the accept side (weak accept).
I know it can be too much to ask to improve the experiments a lot, and I do not expect the authors to fix everything according to my suggestions. But I would appreciate it if the authors could reply to make sure that my understanding is correct. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The discussion of the limitations of the proposed method and potential future work was appropriate. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
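As an aside, the cumulative-probability baseline sketched in question 7.2 of the review above can be written in a few lines; this is a hypothetical illustration of the reviewer's suggestion, not code from the paper:

```python
def threshold_prediction_set(probs, threshold=0.8):
    # Baseline from question 7.2: add classes in decreasing order of
    # predicted probability until their cumulative mass reaches `threshold`.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    chosen, total = set(), 0.0
    for i in order:
        chosen.add(i)
        total += probs[i]
        if total >= threshold:
            break
    return chosen

# For threshold 0.8 and [0.1, 0.1, 0.1, 0.2, 0.6], this picks the last two
# classes (indices 3 and 4), matching the worked example in the question.
```

Unlike the inflated argmax, a rule like this comes with no stability guarantee, which is the contrast the paper's theory is concerned with.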
Rebuttal 1: Rebuttal: 1. *“Experiments were quite weak in my opinion to highlight the effectiveness of inflated argmax. For example, more reasonable baseline that has larger than 1 𝛽_size could have been provided to show that inflated argmax is quite favorable if it is allowed to predict more than one class. Current baselines are restrictive in the sense that only one class can be chosen (argmax). Several improvements could also be done (please see the questions section for possible improvements).”* Please see discussion point 1 in our global response. 2. *“The proposed inflated argmax requires bagging, which can be quite computationally expensive, or prohibitive in some scenarios under limited resources or large dataset/model.”* The inflated argmax operates on the predicted probabilities, so it does not require bagging per se. The inflated argmax on its own is very computationally efficient. Please see discussion point 2 in our global response for discussion of the computational cost of bagging. 3. *“Can we also see the performance of inflated argmax without bagging in the experiment? In my understanding, this should also be possible with the proposed efficient algorithm. This can be useful to see how much the bagging method is needed for the proposed algorithm.”* We have added this comparison—please see the pdf attached to our global response. As we can see, both bagging and the inflated argmax are important for selection stability. 4. *“Visualizing the number of bags needed to perform reasonably in the experiment can also be useful, because it can be expensive if we use a more sophisticated algorithm or a larger dataset.”* We have rerun the experiment with a larger number of bags—please see the pdf attached to our global response. As you can see, this dramatically improves selection stability, especially with the inflated argmax. In our revised appendix, we will investigate more thoroughly how the number of bags impacts stability in practice. 5.
*“In deep learning, the overconfidence problem can often be an issue when one wants to use probability scores. How does this problem affect the proposed inflated argmax in practice?”* This is an interesting question. Overconfident probability scores mean that the top label is ahead by a large margin, so the inflated argmax will frequently return a singleton if $\epsilon$ is sufficiently small. In our global response, per your suggestion, we have attached a pdf which shows the stability of the base algorithm with the inflated argmax. On its own, the inflated argmax does not lead to selection stability *precisely because of overconfident probabilities.* Bagging addresses the problem of overconfidence, and then the inflated argmax captures the fact that several labels have similar probability scores after bagging. 6. *“Can we know that the second class to be chosen as a candidate class is likely to be the second-largest probability class?”* Yes, this is guaranteed by the 3rd statement of Proposition 9. 7. *“Is it reasonable to say that precision is similar given the different average precision and standard error provided (Lines 293--294)?”* Yes, it is reasonable. For example, if we compare the average precision between argmax+subbagging and inflated-argmax+subbagging, the pooled Z-score is around 1.5, which is not statistically significant. Considering further that we’re making multiple comparisons in the table, it would be unreasonable to say the rates are very different. 8. *“I was wondering if there are other possible baselines in related work that could be used in the experiments?”* and *“Suggestions on simple baselines: 7.1 Top-2 class prediction; the size of beta will be 2. 7.2 Predicting classes until the sum of probability reaches a certain constant, e.g., if we set threshold = 0.8 and the prediction probability vector is [0.1, 0.1, 0.1, 0.2, 0.6], then we will pick the last two classes (0.2 and 0.6).
This simple idea looks intuitive and gives a flexible set, and this is the most simple baseline I had in mind that could be effective when I first read this paper. What do you think about this baseline?”* We have added both of your suggestions and others. Please see discussion point 1 in our global response. --- Rebuttal Comment 1.1: Title: Thank you for the author feedback. Comment: Thank you very much for the authors' feedback. I have read the other reviews and the rebuttal. Thank you for the explanation and additional experiments. I agree with the authors' rebuttal 2 regarding the core contribution. The core contribution of this paper is laying the foundation for stable classification by introducing several notions of classifier stability mentioned in this paper. This idea is novel to the best of my knowledge. Bagging is indeed more expensive than going without bagging, but the paper did a great job demonstrating that a theoretically justified algorithm achieves this notion of stability. Having a more efficient algorithm or more analysis of this problem setting can be further studied in the future. I appreciate that other algorithms that are not necessarily stable but somewhat reasonable to try in this problem setting were added in the experiment section. This can motivate the practical usefulness of the inflated argmax. Classification with abstention/rejection is indeed another framework that has been well studied, to abstain from making a prediction when the classifier is not confident. I treat this research direction as a different direction for making classifiers more reliable by outputting a flexible-size prediction set with some guarantee. This approach can also help machine learning systems collaborate with humans to reach better judgments and is therefore worth investigating further. For these reasons, I increase the score to 7 (Accept).
--- Reply to Comment 1.1.1: Title: Thanks for your reply Comment: We sincerely appreciate your constructive comments throughout this process. We look forward to integrating your suggestions into our revision.
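Regarding the pooled Z-score comparison mentioned in point 7 of the rebuttal above, the computation is a standard two-sample comparison of means with standard errors; the numbers below are illustrative stand-ins, not necessarily those behind the quoted ~1.5 figure:

```python
import math

def pooled_z(mean_a, se_a, mean_b, se_b):
    # Two-sample Z statistic with pooled standard errors.
    return abs(mean_a - mean_b) / math.sqrt(se_a ** 2 + se_b ** 2)

# Illustrative: average precisions of 0.893 and 0.886, each with standard
# error 0.003, give |Z| below the ~1.96 cutoff for significance at the 5%
# level (even before any multiple-comparison correction).
z = pooled_z(0.893, 0.003, 0.886, 0.003)
```

As the rebuttal notes, making several such comparisons in one table would further raise the bar for declaring a difference significant.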
Summary: An algorithm is considered stable if small perturbations in the training data do not result in significant changes to its output. Stability has been previously explored in the context of regression. This paper extends this concept to the multiclass framework, where previous work has focused on stability in terms of output probability rather than label selection processes. Given that argmax is not a continuous function, it can lead to instability in the classifier. To address this issue, the authors first define stability for the multiclass domain and then demonstrate that a stable classifier alone is insufficient; a corresponding label selection strategy is also necessary. The authors propose a framework combining bagging and the inflated argmax to derive such a combination, which they validate experimentally, showing that their approach achieves higher stability while maintaining precision at a minimal cost. Strengths: 1. The authors introduce a novel definition of algorithmic stability in classification, shifting the focus from output probability to predicted labels. They demonstrate that, according to this definition, a stable classifier is insufficient for achieving overall stability. Furthermore, they establish a connection between this notion of stability and classical algorithmic stability concepts. Building on these insights, the authors propose an approach that leverages bagging and the inflated argmax to achieve said stability. 2. It is essential to emphasize that the proposed framework operates without assuming any specific distribution on the data. Additionally, it does not rely on the number of classes or the dimensionality of the covariates, making it versatile and applicable to various algorithms. 3. The experimental findings demonstrate the stability of the proposed framework. Moreover, they reveal the negative impact of argmax on stability.
Notably, the results show that the proposed approach incurs only a minor penalty to precision, underscoring its effectiveness. Weaknesses: 1. The authors point out my primary concern in their discussion section, highlighting the practicality of the proposed framework. Notably, bagging is a computationally expensive procedure, and while the authors suggest that parallelization can mitigate this issue, I agree that this does not necessarily translate to a reduction in overall cost. This becomes particularly challenging when dealing with large datasets commonly encountered in modern classifiers (e.g., image classification), where the sheer scale of computation required can be prohibitively expensive even with multiple GPUs. 2. The experimental section relies heavily on a single dataset and classifier combination. To fully demonstrate the benefits of the proposed framework, I believe it would be necessary to conduct a more comprehensive set of experiments. Specifically, I suggest that the authors provide additional evidence through experimentation to illustrate how the slight tradeoff in precision is justified by the gained stability in practice. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. From the definition of stability given in 4, it is evident that the authors trained the classifiers multiple times while removing training samples. What does the overall average of average precision look like in such cases? Such a metric might show the advantage of the proposed framework. 2. My other main question is related to practicality. I would like to know the authors' thoughts on that limitation. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. *“The authors point out my primary concern in their discussion section, highlighting the practicality of the proposed framework. Notably, bagging is a computationally expensive procedure, and while the authors suggest that parallelization can mitigate this issue, I agree that this does not necessarily translate to a reduction in overall cost. This becomes particularly challenging when dealing with the large datasets commonly encountered in modern classifiers (e.g., image classification), where the sheer scale of computation required can be prohibitively expensive even with multiple GPUs.”* and *“My other main question is related to practicality. I would like to know the authors' thoughts regarding that limitation.”* Please see discussion point 2 in our global response, and let us know if you have any additional concerns regarding practicality. 2. *“The experimental section relies heavily on a single dataset and classifier combination. To fully demonstrate the benefits of the proposed framework, I believe it would be necessary to conduct a more comprehensive set of experiments. Specifically, I suggest that the authors provide additional evidence through experimentation to illustrate how the slight tradeoff in precision is justified by the gained stability in practice.”* We agree that additional experiments may be useful for a more detailed understanding of this tradeoff, and we would be happy to add more experiments to the supplement in our revision, posing the same questions under different settings and different data generating processes. However, we would like to point out that, in our view, this experimental comparison is not central to the main point of the paper. 
While, from an empirical point of view, our method and a competing method are simply two different points along the tradeoff between precision and stability, from the theoretical point of view they are not on equal footing at all: our method offers a theoretical guarantee of selection stability, and there is no guarantee of this type for competing options. 3. *“From the definition of stability given in (4), it is evident that the authors trained the classifiers multiple times while removing training samples. What does the overall average of average precision look like in such cases? Such a metric might show the advantage of the proposed framework.”* In the table below, we compare $\beta_{\text{prec}}$ (as defined in the paper) to the "overall average of average precision" over all $500$ leave-one-out models. In this experiment, there is not a substantial difference between $\beta_{\text{prec}}$ (the original precision measure) and $\beta^{\text{LOO}}_{\text{prec}}$ (this new measure).

| | Results for Base Algorithm $\mathcal{A}$ | | Results with Subbagging $\widetilde{\mathcal{A}}_m$ | |
| --- | --- | --- | --- | --- |
| | $\beta_{\text{prec}}$ | $\beta^{\text{LOO}}_{\text{prec}}$ | $\beta_{\text{prec}}$ | $\beta^{\text{LOO}}_{\text{prec}}$ |
| $\text{argmax}$ | 0.879 (0.003) | 0.884 (0.002) | 0.893 (0.003) | 0.893 (0.003) |
| $\text{argmax}^\varepsilon$ | 0.873 (0.003) | 0.879 (0.003) | 0.886 (0.003) | 0.886 (0.003) |
| $\text{top-}2$ | 0.000 (0.000) | 0.000 (0.000) | 0.000 (0.000) | 0.000 (0.000) |
| Thresholding $\Gamma^*_{0.8}$ | 0.756 (0.004) | 0.761 (0.004) | 0.704 (0.005) | 0.703 (0.005) |
| NDC for $F_1$ [1] | 0.832 (0.004) | 0.836 (0.003) | 0.825 (0.004) | 0.825 (0.004) |

---

Rebuttal Comment 1.1: Title: Response to author rebuttal Comment: I thank the authors for taking the time to address my concerns. I also see that some of these concerns were discussed with other reviewers. 
After going through all the discussions, I believe my concerns have been largely addressed, so I have decided to raise my score.
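The leave-one-out comparison discussed in this thread ($\beta_{\text{prec}}$ versus the overall average precision across leave-one-out retrainings) can be sketched in a few lines. This is only an illustrative toy: the nearest-centroid model and the synthetic one-dimensional data below are stand-ins I introduce for demonstration, not the classifier or data used in the paper's experiments.

```python
# Illustrative sketch (not the paper's code): comparing beta_prec of a model
# trained on the full dataset against the average precision over all
# leave-one-out retrainings, using a toy nearest-centroid classifier.
import random

def train_centroids(data):
    """Fit one centroid per class; data is a list of (x, y) pairs with scalar x."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    # Plain argmax over classes: pick the nearest centroid.
    return min(centroids, key=lambda y: abs(x - centroids[y]))

def precision(centroids, test):
    # Fraction of test points whose singleton prediction is correct.
    return sum(predict(centroids, x) == y for x, y in test) / len(test)

random.seed(0)
train = [(random.gauss(y, 0.5), y) for y in (0, 1, 2) for _ in range(30)]
test = [(random.gauss(y, 0.5), y) for y in (0, 1, 2) for _ in range(20)]

# Precision of the model trained on the full dataset ...
beta_prec = precision(train_centroids(train), test)
# ... versus the overall average precision across all leave-one-out retrainings.
loo_precs = [precision(train_centroids(train[:i] + train[i + 1:]), test)
             for i in range(len(train))]
beta_loo_prec = sum(loo_precs) / len(loo_precs)
print(round(beta_prec, 3), round(beta_loo_prec, 3))
```

As in the rebuttal's table, removing a single training point barely moves the fitted model, so the two precision measures land close together.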
Summary: This paper focuses on the stability of learning multiclass classifiers. It proposes a notion of selection stability for the learning algorithm, as well as a modified argmax that returns a set of labels. Evaluations are conducted using FashionMNIST and simple models. Strengths: - This paper focuses on the problem of classifier stability, which has high practical interest. - This paper proposes a relaxed form of the argmax, which contributes concretely to the problem. - Propositions and theoretical relations to existing results are soundly presented. Weaknesses: - The presentation of the proposed contribution can be improved. - The core proposal of the inflated argmax is introduced in Section 3.2, which is too late. - Bagging is not part of the proposal and shows little relation to the contribution, so it could be presented in less space. - The additional properties of the inflated argmax are also not directly related to the problem. - "Learning stability by leave-one-out data" and "prediction stability when using argmax for similar probabilities" seem to be mixed and not clearly addressed. - Some sentences, such as `workflow is fundamentally incompatible with the goal of algorithmic stability`, are too extreme; for example, simple treatment using proper regularization terms can somewhat mitigate the problem. - The empirical evaluation does not match the last sentence of the abstract. - The combination of the $\epsilon$-inflated argmax with the base learning algorithm should be compared. - It would also be nice to show results on at least two datasets. Technical Quality: 3 Clarity: 2 Questions for Authors: - How do tail stability and selection stability relate differently to $\epsilon$-compatibility? To confirm, do bagging classifiers only satisfy tail stability? Are there other algorithm designs that satisfy these stabilities? - How should one interpret the empirical evaluation results? 
With similar accuracy shown in Table 1, the proposed method should show better stability in Figure 2? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Practical limitations are presented in section 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. *“The presentation of the proposed contribution can be improved.”* a. *“The core proposal of the inflated argmax is introduced in Section 3.2, which is too late.”* Thank you for this feedback. We agree that introducing the inflated argmax earlier in the paper would be better, and we will try to reorganize in order to do so. However, we would like to point out that the framework of selection stability and its connections to classical algorithmic stability are core contributions, and these are given in Section 2 -- in other words, the inflated argmax definition is not the first novel contribution presented in the paper. b. *“Bagging is not part of the proposal and shows little relation to the contribution, so it could be presented in less space.”* While the section on bagging is indeed background material, in order to use bagging as the first stage of our two-stage procedure, we need to recap the requisite tail stability result for bagging -- this is an essential ingredient of our selection stability framework. In our revision, we will add a sentence to the start of this section to clarify that this is background material and explain its role in what follows. c. *“The additional properties of the inflated argmax are also not directly related to the problem.”* Section 3.2.2 (on additional properties of the inflated argmax) covers a lot of highly relevant material. We show how to compute our inflated argmax. The properties in Proposition 9 illustrate how the inflated argmax in many ways acts as a natural extension of the argmax. Our optimality result, Proposition 11, connects back to these additional properties by establishing that the inflated argmax is the *most parsimonious $\epsilon$-compatible extension of the argmax*. In our revision, we will add clarifications to the text to explain the significance of these results and how they relate to the main aims of the paper. d. 
*“‘Learning stability by leave-one-out data’ and ‘prediction stability when using argmax for similar probabilities’ seem to be mixed and not clearly addressed.”* Neither of these phrases is used in the paper. Could you please clarify what you mean? e. *“Some sentences, such as `workflow is fundamentally incompatible with the goal of algorithmic stability`, are too extreme; for example, simple treatment using proper regularization terms can somewhat mitigate the problem.”* It may be possible for regularization to make the predicted probabilities more stable. However, as we point out in Sections 1.1 and 2.3, this is not enough to ensure selection stability, *because the argmax is discontinuous.* The issue is that *selection* is unstable whenever the model is nearly equally confident in multiple classes. 2. *“The empirical evaluation does not match the last sentence of the abstract.”* The last sentence of the abstract states, "we demonstrate that the inflated argmax provides necessary protection against unstable classifiers, without loss of accuracy." This does match our empirical evaluation. In Figure 2 we see a dramatic improvement in stability (see also the revised Figure 2 attached as a pdf to the global rebuttal). While $\beta_{\text{prec}}$ in Table 1 decreases slightly with the inflated argmax, the difference is not statistically significant. Specifically, comparing the average precision between argmax+subbagging and inflated-argmax+subbagging, the pooled $Z$-score is around 1.5, which is not statistically significant. Despite having $N=10,000$ test samples, there is not a detectable difference in $\beta_{\text{prec}}$ before and after employing the inflated argmax. We will make this point more clearly in our revision. a. *“The combination of the $\epsilon$-inflated argmax with the base learning algorithm should be compared.”* We have added this baseline to our table: please see discussion point 1 in our global response. 
We have also added it to the revised Figure 2, which is attached in our global response. b. *“It would also be nice to show results on at least two datasets.”* We agree that additional experiments may be useful, and we would be happy to add more experiments to the supplement in our revision, posing the same questions under different settings and different data generating processes. However, the main contribution of our paper is theoretical: we show how to guarantee selection stability for *any classifier* on *any dataset*. Our emphasis is thus on the idea and the theoretical contribution. Our experiment in Section 4 and simulation in Section D.2 serve as illustrations of these results to show how they work in practice. That being said, we will add more experiments to the supplement to reinforce these results. 3. *“How do tail stability and selection stability relate differently to $\epsilon$-compatibility? To confirm, do bagging classifiers only satisfy tail stability? Are there other algorithm designs that satisfy these stabilities?”* For the first question, we address the relationship between all three criteria in Proposition 4. We can expand the discussion around Proposition 4 in our revision to explain the relationship more clearly. For the latter two questions, other meta-algorithms may satisfy tail stability. This is an interesting open question. 4. *“How should one interpret the empirical evaluation results? With similar accuracy shown in Table 1, the proposed method should show better stability in Figure 2?”* Our proposed method does show much better stability. Note in Figure 2 that smaller curves are more stable. In our revised Figure 2 (attached to our global response), the instability measure $\delta_j = 0$ for *every* test point for our proposed method. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed and patient response, and I have confirmed my shared concern over limited experiments and the computational cost of bagging. 
I thank the authors for clarifying the organization of the manuscript, as well as for pointing me to Section 3.2.2 and Proposition 4, which I had overlooked. I also appreciate the improved Figure 2. Regarding my concern 1.e: in Definition 1, I missed the feature $x$ in the notation $C$ and thought it considered only the sets of classifiers output by a learning algorithm. --- Rebuttal 2: Title: Thanks for your reply Comment: We sincerely appreciate your constructive comments throughout this process. We look forward to integrating your suggestions into our revision.
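The argmax discontinuity discussed in this thread can be illustrated concretely. The margin rule below is a simplified stand-in that I introduce for illustration; the paper's actual inflated $\text{argmax}^\varepsilon$ is defined via a different condition, but the intuition is the same: return every label whose score is nearly maximal, so that tiny perturbations cannot produce disjoint selections.

```python
# Illustrative sketch: a margin-based set-valued selection rule in the spirit
# of the inflated argmax. NOTE: this simplified rule is NOT the paper's actual
# definition of argmax^eps; it only illustrates why returning all near-maximal
# labels restores stability when the plain argmax flips between near ties.
def margin_argmax(probs, eps):
    """Return all labels within eps of the maximum predicted probability."""
    top = max(probs)
    return {j for j, p in enumerate(probs) if p >= top - eps}

# Two nearly identical probability vectors (e.g., fitted with and without one
# training point): the plain argmax flips from label 0 to label 1 ...
p1 = [0.45, 0.44, 0.11]
p2 = [0.44, 0.45, 0.11]
assert max(range(3), key=lambda j: p1[j]) != max(range(3), key=lambda j: p2[j])

# ... while the margin-based rule returns overlapping sets for both.
s1, s2 = margin_argmax(p1, eps=0.05), margin_argmax(p2, eps=0.05)
print(s1, s2, s1 & s2)  # {0, 1} {0, 1} {0, 1}
```

This matches the rebuttal's point that selection is unstable precisely when the model is nearly equally confident in multiple classes.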
Summary: This submission studies the problem of making set-valued predictions, in which the classifier should strive for an optimal balance between the correctness (the true class is among the candidates) and the precision (the candidates are not too many) of its prediction. Yet, this submission mentions only one method for this purpose [1], which predicts, for each query instance, a Bayes-optimal set-valued prediction optimizing the F-measure. It seems to miss a discussion of (several) related methods, which are constructed based on both probability theory and imprecise probability, e.g., the ones discussed in [3, 4, 6] and references therein. On an abstract level, the inflated argmax presented in Section 3.2 might be seen as a way to distort (render imprecise) the given singleton conditional probability distribution. It might meaningfully enlarge the set of existing approaches targeting similar purposes, such as the ones discussed in [2, 5] and references therein. The proposed algorithm is compared with a variant of LeNet-5, implemented in the PyTorch [PGML19] tutorials as GarmentClassifier(), and its ensemble version on the Fashion-MNIST data set. Empirical evidence suggests that the proposed algorithm provides better $\beta_{\text{prec}}$ than the single classifier, but worse than the ensemble version. The average set size produced by the proposed algorithm seems to be reasonably small. Results on the selection instability $\delta_j$ defined in (4), i.e., an approximation of the loss version of the stability $\delta$ defined in (1), which essentially measures the robustness of the set-valued predictions under tiny changes in the training data set, seem to be in favor of the proposed algorithm. [1] Del Coz, J. J., Díez, J., & Bahamonde, A. (2009). Learning Nondeterministic Classifiers. Journal of Machine Learning Research, 10(10). [2] Montes, I., Miranda, E., & Destercke, S. (2020). Unifying neighbourhood and distortion models: part I–new results on old models. 
International Journal of General Systems, 49(6), 602-635. [3] Mortier, T., Wydmuch, M., Dembczyński, K., Hüllermeier, E., & Waegeman, W. (2021). Efficient set-valued prediction in multi-class classification. Data Mining and Knowledge Discovery, 35(4), 1435-1469. [4] Nguyen, V. L., Destercke, S., Masson, M. H., & Hüllermeier, E. (2018, July). Reliable multi-class classification based on pairwise epistemic and aleatoric uncertainty. In 27th International Joint Conference on Artificial Intelligence (IJCAI 2018) (pp. 5089-5095). [5] Nguyen, V. L., Zhang, H., & Destercke, S. (2023, September). Learning Sets of Probabilities Through Ensemble Methods. In European Conference on Symbolic and Quantitative Approaches with Uncertainty (pp. 270-283). Cham: Springer Nature Switzerland. [6] Troffaes, M. C. (2007). Decision making under uncertainty using imprecise probabilities. International journal of approximate reasoning, 45(1), 17-29. Strengths: S1: I think taking into account the instability, due to tiny changes in the training data set, when making set-valued prediction is interesting. Yet, assessing this aspect of classifiers seems to be costly, i.e., requiring a sufficiently large number of leave-one-out models. S2: Empirical evidence seems to be promising. For example, $\beta_{\text{prec}}$ seems to suggest that the proposed algorithm tends to produce a reasonably high proportion of correct singleton predictions. The average set size seems to suggest that the set-valued predictions produced by the proposed algorithms are reasonably small. Empirical evidence regarding the instability defined in (4) seems to be in favor of the proposed algorithm. Weaknesses: W1: I think adding a few more competitors would be useful in assessing the potential advantages of the proposed algorithms. For example, threshold-based algorithms, which produce Bayes-optimal predictions of F-measure [1], and $U_{65}$ and $U_{80}$ [3], would be a good choice. 
They would cost $O(L\log(L))$ time to sort the labels according to the decreasing order of the predicted conditional probability masses and select the optimal threshold. W2: Assessing the classifiers with respect to utility-discounted predictive accuracies, such as $U_{65}$ and $U_{80}$, which respectively give small and large rewards for being cautious [7], would help to further highlight the potential (dis)advantages of the classifiers. W3: Moreover, an imprecise classifier should abstain (i.e., provide set-valued predictions) on difficult cases, on which the precise classifier is likely to fail. This would be assessed in different ways. For example, one might consider reporting the correctness of the cautious classifiers in the case of abstention versus the accuracy of the precise classifiers [4]. W4: The tolerance $\epsilon$ is set to be $.05$. Please provide arguments supporting this choice. Yet, choosing this hyperparameter using a validation set might be challenging. Providing a sensitivity analysis on the choice of $\epsilon$ would be helpful. [1] Del Coz, J. J., Díez, J., & Bahamonde, A. (2009). Learning Nondeterministic Classifiers. Journal of Machine Learning Research, 10(10). [3] Mortier, T., Wydmuch, M., Dembczyński, K., Hüllermeier, E., & Waegeman, W. (2021). Efficient set-valued prediction in multi-class classification. Data Mining and Knowledge Discovery, 35(4), 1435-1469. [4] Nguyen, V. L., Destercke, S., Masson, M. H., & Hüllermeier, E. (2018, July). Reliable multi-class classification based on pairwise epistemic and aleatoric uncertainty. In 27th International Joint Conference on Artificial Intelligence (IJCAI 2018) (pp. 5089-5095). [7] Zaffalon, M., Corani, G., & Mauá, D. (2012). Evaluating credal classifiers by utility-discounted predictive accuracy. International Journal of Approximate Reasoning, 53(8), 1282-1301. 
Technical Quality: 3 Clarity: 3 Questions for Authors: Q1: Please refer to "Weaknesses" for detailed comments and suggestions for further revision. Q2: The notion of selection stability $\delta$ at sample size $n$ seems to be interesting. I guess the experimental setting provides the necessary information to compute the value of the selection stability $\delta$ given in (1). To provide an idea of how empirical evidence may differ from the theoretical results, it might be helpful to compute that value and report the proportion of test instances whose empirical values $1 - \delta_j$ are smaller than $\delta$. Q3: I guess re-defining (4) as $\delta_j = \frac{1}{500}\sum_{k=1}^{500} \mathbf{1}\{s(\hat{p}(\tilde{X}_j)) \cap s(\hat{p}^{\setminus i_k}(\tilde{X}_j)) = \emptyset\}$ and changing the description of Figure 2 accordingly (to be consistent with the notion of the selection stability $\delta$ defined in Definition 1) might make things a bit clearer. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: L1: Discussions on related work might need to be extended. L2: the empirical study might need to be enlarged with closely related algorithms. Please refer to "Weaknesses" for detailed comments and suggestions for further revision. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
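For reference, the utility-discounted accuracies $u_{65}$ and $u_{80}$ invoked in W2 follow the standard definitions of Zaffalon et al. [7], as I recall them: a prediction set $S$ covering the true label $y$ earns a size-discounted reward, and zero otherwise. A minimal sketch:

```python
# Sketch of the utility-discounted accuracies u_65 and u_80 from [7]: a
# correct set of size k earns 1.6/k - 0.6/k^2 under u_65 and 2.2/k - 1.2/k^2
# under u_80, and an incorrect set earns 0. A correct set of size 2 scores
# 0.65 and 0.80 respectively, which is where the names come from.
def discounted_utility(S, y, a, b):
    if y not in S:
        return 0.0
    k = len(S)
    return a / k - b / (k * k)

def u65(S, y):
    return discounted_utility(S, y, 1.6, 0.6)

def u80(S, y):
    return discounted_utility(S, y, 2.2, 1.2)

print(round(u65({0}, 0), 2), round(u80({0}, 0), 2))        # 1.0 1.0
print(round(u65({0, 1}, 0), 2), round(u80({0, 1}, 0), 2))  # 0.65 0.8
print(u65({0, 1}, 2))                                      # 0.0
```

Both utilities reward correct singletons fully; $u_{80}$ penalizes cautious (larger) sets less than $u_{65}$ does.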
Rebuttal 1: Rebuttal: 1. *“W1: I think adding a few more competitors would be useful in assessing the potential advantages of the proposed algorithms…”*, *“W2: Assessing the classifiers with respect to utility-discounted predictive accuracies, …”* and *“L2: the empirical study might need to be enlarged with closely related algorithms.”* Please see discussion point 1 in our global response. Below we compare algorithms based on $u_{65}$ and $u_{80}$ per your suggestion in W2. However, we stress that the inflated argmax $\text{argmax}^\varepsilon$ is *not* meant to compete with any of these alternative set-valued classification methods, nor is it optimizing for $u_{65}$ or $u_{80}$. It is meant to improve upon the argmax by guaranteeing selection stability. None of these other methods have been shown to satisfy selection stability, and even if they had been, ours satisfies the optimality result of Proposition 11.

| | Results for Base Algorithm $\mathcal{A}$ | | | Results with Subbagging $\widetilde{\mathcal{A}}_m$ | | |
| --- | --- | --- | --- | --- | --- | --- |
| | $u_{65}$ | $u_{80}$ | $\beta_{\text{size}}$ | $u_{65}$ | $u_{80}$ | $\beta_{\text{size}}$ |
| $\text{argmax}$ | 0.879 (0.003) | 0.879 (0.003) | 1.000 (0.000) | 0.893 (0.003) | 0.893 (0.003) | 1.000 (0.000) |
| $\text{argmax}^\varepsilon$ | 0.881 (0.003) | 0.883 (0.003) | 1.015 (0.001) | 0.895 (0.003) | 0.897 (0.003) | 1.015 (0.001) |
| $\text{top-}2$ | 0.632 (0.001) | 0.777 (0.001) | 2.000 (0.000) | 0.631 (0.001) | 0.777 (0.001) | 2.000 (0.000) |
| Thresholding $\Gamma^*_{0.8}$ | 0.880 (0.002) | 0.909 (0.002) | 1.248 (0.005) | 0.868 (0.002) | 0.907 (0.002) | 1.248 (0.005) |
| NDC for $F_1$ [1] | 0.889 (0.003) | 0.902 (0.003) | 1.105 (0.003) | 0.898 (0.003) | 0.915 (0.002) | 1.105 (0.003) |

2. *“W3: Moreover, an imprecise classifier should abstain (i.e., provide set-valued predictions) on difficult cases, on which the precise classifier is likely to fail. This would be assessed in different ways. 
For example, one might consider reporting the correctness of the cautious classifiers in the case of abstention versus the accuracy of the precise classifiers [4].”* The idea of "abstention" is a useful framework, but in fact it already appears implicitly for any set-valued classifier — if we return the entire set $\hat{S} = $ {$1,\dots,L$}, this means that we are not able to say anything about the true value of the label, i.e., this is equivalent to an abstention. In fact, set-valued classification has the capacity to return strictly more information than a label-or-abstain scheme: if we always either return a label (i.e., $\hat{S} =$ {$y$} for some $y$) or an abstention (equivalently, $\hat{S} =$ {$1,\dots,L$}), we do not have the opportunity to express partial information (for example, $\hat{S} =$ {$y_1,y_2$}), which would allow us to express that we have partial information about the true label for a particular sample $X$, i.e., to adapt to our level of uncertainty in different instances. Turning to the question of empirical comparison: in a label-or-abstain scheme, the frequency of abstaining is determined by how often $|\hat{S}|=1$, which is related to the precision measure in our experiments. 3. *“W4: The tolerance $\epsilon$ is set to be .05. Please provide arguments supporting this choice. Yet, choosing this hyperparameter using a validation set might be challenging. Providing a sensitivity analysis on the choice of $\epsilon$ would be helpful.”* The tolerance parameter $\epsilon$ has an intrinsic meaning; it is different from a tuning parameter such as the regularization parameter $\lambda$ for the Lasso, where there is no intrinsic meaning and therefore tuning is the only reasonable way to choose a value. 
In our setting, $\epsilon$ represents the margin by which we inflate the argmax — for example, setting $\epsilon = 0.05$ can, in a sense, be viewed as saying that an estimated probability of 0.45 versus 0.4 is sufficiently close to be ambiguous in terms of which label should be the “winner”. For this reason, it does not seem appropriate to perform a sensitivity analysis. (Of course, as we see in our theory, it is also possible for the user to instead choose a desired value of $\delta$, and then specify $\epsilon$ accordingly via Theorem 5; again, here $\epsilon$ would be determined by the user’s desired value of $\delta$, and does not need to be tuned.) 4. *“Q2: The notion of selection stability $\delta$ at sample size $n$ seems to be interesting. I guess the experimental setting provides the necessary information to compute the value of the selection stability $\delta$ given in (1). To provide an idea of how empirical evidence may differ from the theoretical results, it might be helpful to compute that value and report the proportion of test instances whose empirical values... are smaller than $\delta$.”* Note that the $\delta_j$ range over the different test points $X_j$, and selection stability holds for all test points $x$. Thus, selection stability means that *all* of the $\delta_j$ are at most $\delta$. We will clarify this point in our discussion of the experiment results in our revision. 5. *“Q3: I guess re-defining (4) as... and changing the description of Figure 2 accordingly (to be consistent with the notion of the selection stability $\delta$ defined in Definition 1) might make things a bit clearer.”* Thank you for catching this — we will fix the definition in our revision. 6. *“L1: Discussions on related work might need to be extended.”* In discussion point 1 of our global response, we discuss other set-valued classification methods that we have added to the experiments. 
We will add a discussion of these methods to the related work section, including all of the references you suggest. We also welcome any additional suggestions for related work that we should discuss. --- Rebuttal 2: Comment: Thanks for your detailed response. The additional empirical evidence and discussion indeed say more about the potential (dis)advantages of the proposed algorithms. Based on the response, I think we might see $\beta_{\text{prec}}$ as an evaluation metric for classification with rejection, i.e., a predictor will only make a prediction for a query instance if it can make a precise/singleton prediction. This might also imply that whether the set-valued prediction covers the true class, and whether the set size is large, are not relevant. In this sense, I think it is fair to say that the proposed algorithms provide advantages compared to top-2, Thresholding $\Gamma^*_{0.8}$, and NDC for $F_1$. From their definitions, it is not hard to see that the set-valued predictions that optimize $u_{65}$, $F_1$, $u_{80}$, and a few other criteria, e.g., the ones mentioned in [6], cover the top-ranked class, i.e., the output of $\text{argmax}$ (unless there are multiple classes with the highest score). By definition, optimizing $u_{65}$ would provide smaller set-valued predictions compared to the others, and therefore might naturally become a related competitor regarding $\beta_{\text{prec}}$. Given the (set-valued) predictions on the test set, the test set can be partitioned into two parts: where the classifier/predictor makes precise predictions and where it makes set-valued predictions. As far as I understand from [7] (and [3]), $u_{65}$, $u_{80}$, and other utility-discounted predictive accuracies are designed to assess classifiers/predictors on the entire (test) data set. The additional results suggest that the proposed algorithms provide worse scores compared to NDC for $F_1$, which is likewise not designed to optimize $u_{65}$ or $u_{80}$. 
If one opts for $u_{65}$ and $u_{80}$, I think it is reasonable to optimize them directly. Yet, I think it is difficult to find a specific evaluation criterion (or criteria) that assesses multiple aspects of set-valued predictors. I think it is reasonable to expect a proposed set-valued predictor to demonstrate advantages with respect to at least a few criteria that assess the predictor on the entire (test) data set. Regarding "W3: Moreover, an imprecise classifier should abstain (i.e., provide set-valued predictions) on difficult cases, on which the precise classifier is likely to fail. This would be assessed in different ways. For example, one might consider reporting the correctness of the cautious classifiers in the case of abstention versus the accuracy of the precise classifiers [4].", I should make my point more explicit. As far as I understand, the correctness of the cautious classifiers is the proportion of the time the set-valued predictions cover the true classes. I think such a criterion is designed to assess the cost of making set-valued predictions, i.e., the cautious classifiers should (only) provide set-valued predictions on difficult cases, on which the precise classifier is likely to fail. In my opinion, this criterion should complement $\beta_{\text{prec}}$, the utility-discounted predictive accuracies, set size, and other evaluation criteria, which take into account both precise predictions and set-valued predictions. Yet, I think the notion of selection stability is interesting (as I wrote in the initial review), and it might make the proposed algorithms differ from other cautious classifiers or set-valued predictors. The motivation to go with the proposed algorithms might need to be further elaborated. After due consideration, I keep my initial rating, but I appreciate any additional results and discussions that may strengthen the motivation to go with the proposed algorithms. 
--- Rebuttal Comment 2.1: Comment: Thank you for your further detailed comments. We have expanded our table to include the two methods you suggest, Set-Valued Bayes-Optimal Prediction (SVBOP) for $u_{65}$ and $u_{80}$. We bold the statistical winners in each column.$^\dagger$ These new methods overall perform somewhat similarly to NDC for $F_1$, but as you anticipated, the expected set size for SVBOP-65 is the smallest of the three. However, the expected set size for the inflated argmax is much smaller than for these other three methods. Note also that although its $u_{65}$ utility *is* slightly smaller than SVBOP-65's on the test set, the inflated argmax qualifies as a statistical winner in this column as well. Our proposal thus performs competitively along three of the four criteria we have considered ($\beta_{\text{prec}}$, $\beta_{\text{size}}$ and $u_{65}$). We agree that, on its own, $\beta_{\text{prec}}$ can be viewed as an evaluation metric for classification with rejection, since it requires a singleton prediction. Our motivating goal is to return a singleton as often as possible while achieving a user-specified selection stability level. While there are many different perspectives on set-valued classification, we reiterate that what is most novel about our work is that our proposal has distribution-free theory. This means that, regardless of the dataset or base algorithm used, we can guarantee that our method will be stable. Furthermore, our optimality result shows that the inflated argmax is a singleton as often as possible among all $\epsilon$-compatible selection rules. 
| | Results for Base Algorithm $\mathcal{A}$ | | | | Results with Subbagging $\widetilde{\mathcal{A}}_m$ | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | $u_{65}$ | $u_{80}$ | $\beta_{\text{prec}}$ | $\beta_{\text{size}}$ | $u_{65}$ | $u_{80}$ | $\beta_{\text{prec}}$ | $\beta_{\text{size}}$ |
| $\text{argmax}$ | 0.879 (0.003) | 0.879 (0.003) | **0.879** (0.003) | **1.000** (0.000) | **0.893** (0.003) | 0.893 (0.003) | **0.893** (0.003) | **1.000** (0.000) |
| $\text{argmax}^\varepsilon$ | **0.881** (0.003) | 0.883 (0.003) | **0.873** (0.003) | 1.015 (0.001) | **0.895** (0.003) | 0.897 (0.003) | **0.886** (0.003) | 1.017 (0.001) |
| $\text{top-}2$ | 0.632 (0.001) | 0.777 (0.001) | 0.000 (0.000) | 2.000 (0.000) | 0.631 (0.001) | 0.777 (0.001) | 0.000 (0.000) | 2.000 (0.000) |
| Thresholding $\Gamma^*_{0.8}$ | 0.880 (0.002) | **0.909** (0.002) | 0.756 (0.004) | 1.248 (0.005) | 0.868 (0.002) | 0.907 (0.002) | 0.703 (0.005) | 1.357 (0.006) |
| NDC for $F_1$ | **0.889** (0.003) | **0.902** (0.003) | 0.832 (0.004) | 1.105 (0.003) | **0.898** (0.003) | **0.915** (0.002) | 0.825 (0.004) | 1.137 (0.004) |
| SVBOP for $u_{65}$ | **0.889** (0.003) | 0.901 (0.003) | 0.838 (0.004) | 1.091 (0.003) | **0.900** (0.003) | **0.915** (0.002) | 0.835 (0.004) | 1.117 (0.003) |
| SVBOP for $u_{80}$ | **0.885** (0.003) | **0.910** (0.002) | 0.777 (0.004) | 1.190 (0.004) | 0.882 (0.002) | **0.915** (0.002) | 0.744 (0.004) | 1.245 (0.005) |

Regarding your second suggestion, we looked at, for each imprecise classifier $\hat{s}$, the ratio of "accuracy of standard argmax given $\hat{s}$ abstains" divided by "accuracy of $\hat{s}$ given $\hat{s}$ abstains". We call this the *superfluous inflation*, since if this ratio is close to 1, it means the argmax is correct as often as the set-valued classifier (when the latter abstains), so outputting a set could be seen as overly conservative. We include the table with these results below. 
The inflated argmax has the smallest ratio, meaning it only abstains in hard cases where it can improve the accuracy by returning a non-singleton set.

| | Results for Base Algorithm $\mathcal{A}$ | Results with Subbagging $\widetilde{\mathcal{A}}_m$ |
| --- | --- | --- |
| | $\beta_{\text{superfluous-inflation}}$ | $\beta_{\text{superfluous-inflation}}$ |
| $\text{argmax}^\varepsilon$ | **0.496** (0.005) | **0.504** (0.005) |
| $\text{top-}2$ | 0.905 (0.003) | 0.919 (0.003) |
| Thresholding $\Gamma^*_{0.8}$ | 0.622 (0.005) | 0.702 (0.005) |
| NDC for $F_1$ | 0.531 (0.005) | 0.591 (0.005) |
| SVBOP for $u_{65}$ | 0.525 (0.005) | 0.575 (0.005) |
| SVBOP for $u_{80}$ | 0.611 (0.005) | 0.689 (0.005) |

$^\dagger$ To check whether a given method wins along a given column, we compute the two-sample Z-score, subtracting that method’s value from the highest value in that column and normalizing by the pooled standard error. We say a method is a winner if the two-tailed Z-test is *not* statistically significant at 0.05.
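For concreteness, the winner rule described in the footnote can be sketched as follows (`is_winner` is our illustrative name, not code from the paper; inputs are a column entry and its standard error, as reported in parentheses in the tables above):

```python
import math

def is_winner(value, se, best_value, best_se, alpha=0.05):
    """Two-sample Z-test between a method and the best method in a column.

    A method "wins" if the two-tailed Z-test against the column's highest
    value is NOT statistically significant at level alpha.
    """
    z = (best_value - value) / math.sqrt(se**2 + best_se**2)  # pooled SE
    # Two-tailed p-value from the standard normal CDF.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p >= alpha

# Example with the u_65 column for the base algorithm A:
# NDC at 0.889 (0.003) ties the best value 0.889 -> winner;
# top-2 at 0.632 (0.001) is far below 0.889 -> not a winner.
```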
Rebuttal 1: Rebuttal: We thank all of the reviewers for their insightful comments and effort in reviewing the paper. Below we discuss two topics that came up in multiple reviews.

1. **Improvements to the experiments section.** Based on feedback from several reviewers, we have expanded our experiments section by adding some additional existing methods as baselines to our table of results. We added several methods for set-valued classification suggested by the reviewers. The old version of our experiment served to illustrate what would happen if, as is most common in classification, you only return a single label. The experiment showed that practical learning algorithms can indeed be unstable for selection, even with bagging. The experiment illustrates our theory that the inflated argmax, combined with bagging, can ameliorate this issue for any dataset and any learning algorithm. Based on feedback, we added the following baselines:
- Top-2 class prediction.
- Thresholding: predicting classes until the cumulative probability reaches at least 0.8.
- Non-deterministic classification (NDC) [1] optimized for the $F_1$-score.

We stress, however, that these methods are not in direct competition with the inflated argmax, since our stability framework is one of the main contributions of our paper. That is, no existing method for set-valued classification can guarantee selection stability. Below we present the updated table of results. Recall that $\beta_{\text{prec}}$ is the precision, meaning the probability that the algorithm returns the correct singleton, and $\beta_{\text{size}}$ is the average size of the set of candidate labels. Note that the inflated argmax returns smaller sets than all competing set-valued methods and achieves a higher value of $\beta_{\text{prec}}$. We also reran the experiment with a larger number of bags; **please see the attached pdf for the stability curves**.
We find that increasing the number of bags improves stability even further, especially with the inflated argmax.

| | Results for Base Algorithm $\mathcal{A}$ | | | Results with Subbagging $\widetilde{\mathcal{A}}_m$ | |
| -------- | ------- | ------- | ------- | ------- | ------- |
| | $\beta_{\text{prec}}$ | $\beta_{\text{size}}$ | | $\beta_{\text{prec}}$ | $\beta_{\text{size}}$ |
| $\text{argmax}$ | 0.879 (0.003) | 1.000 (0.000) | | 0.893 (0.003) | 1.000 (0.000) |
| $\text{argmax}^\varepsilon$ | 0.873 (0.003) | 1.015 (0.001) | | 0.886 (0.003) | 1.016 (0.001) |
| $\text{top-}2$ | 0.000 (0.000) | 2.000 (0.000) | | 0.000 (0.000) | 2.000 (0.000) |
| Thresholding $\Gamma^*_{0.8}$ | 0.756 (0.004) | 1.248 (0.005) | | 0.704 (0.005) | 1.354 (0.006) |
| NDC for $F_1$ [1] | 0.832 (0.004) | 1.105 (0.003) | | 0.825 (0.004) | 1.137 (0.004) |

2. **Computational cost of bagging.** Several reviewers pointed out that bagging may be computationally very demanding for modern large-scale algorithms. We would like to discuss this point here in a bit more detail (and will also add discussion into our revised paper). First, our paper is the first work demonstrating that guaranteed black-box classifier stability is possible at all; certainly it is very interesting and important to consider whether there are computationally efficient methods that can achieve this aim, but a first question is whether the aim is achievable at all. Our paper lays out the key definitions and theoretical groundwork for this problem, opening up new avenues for future research. Moreover, if more efficient learning algorithms could in future be shown to satisfy tail stability (i.e., without bagging), then the main contribution of our paper, the inflated argmax, will be equally applicable and relevant. The inflated argmax is computationally efficient and, due to the two-stage structure of our theory, can be combined with any method shown to have tail stability at the first stage, i.e., it can be separated from bagging.
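To make the bagging stage concrete, here is a minimal subbagging sketch: `train_fn` is a hypothetical interface standing in for any base learning algorithm (returning a function from a test point to a class-probability vector), and each bag is a size-$m$ subset drawn without replacement, one common subbagging variant.

```python
import random

def subbag(train_fn, data, n_bags=200, m=50, seed=0):
    """Subbagging: fit the base algorithm on n_bags random subsets of
    size m (m may be much smaller than len(data)) and average the
    resulting class-probability vectors over the bags."""
    rng = random.Random(seed)
    models = [train_fn(rng.sample(data, m)) for _ in range(n_bags)]

    def predict(x):
        probs = [model(x) for model in models]
        k = len(probs[0])
        return [sum(p[i] for p in probs) / n_bags for i in range(k)]

    return predict
```

The averaged probability vector can then be fed to a (possibly inflated) argmax at the selection stage.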
Next, to return to the question of bagging, we would like to justify why we feel that using bagging for the first stage of the procedure may in many cases be very feasible from the computational point of view. While the original definition of bagging uses the conventional bootstrap, where each bag contains as many samples as the original data set, i.e. $m = n$, in our framework we allow for arbitrary bag size $m$, which could be much smaller than the sample size $n$. Massively subsampling the data ($m \ll n$) can actually help scale learning algorithms to large data sets [2]. Moreover, bagging with $m \approx n$ can be expensive, but there are still many areas of machine learning where it is used, notably in Random Forests. Finally, our experiments also show that a modest number of bags ($B=200$) is all that we really need to start seeing major gains in selection stability. References: [1] Juan José Del Coz, Jorge Díez, and Antonio Bahamonde (2009). "Learning Nondeterministic Classifiers." *Journal of Machine Learning Research* 10(10). [2] Ariel Kleiner, Ameet Talwalkar, Purnamrita Sarkar, and Michael I. Jordan (2014). "A scalable bootstrap for massive data." *Journal of the Royal Statistical Society Series B: Statistical Methodology* 76(4): 795-816. Pdf: /pdf/441635a222fcd7af152f3362d1173d34726de43f.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper considers the stability of classifiers in multi-class classification. In the considered framework, the classifier is allowed to return a set of candidate classes instead of only one class. The stability is defined via the frequency of the set predicted by one classifier having an empty intersection with the set predicted by a classifier trained on the same training set with only one sample removed. The authors propose the construction of a stable classifier in this framework using bagging and the inflated argmax. The proposed approach is proven to have stability guarantees. Additionally, the effectiveness of the approach in terms of increased stability is confirmed in the empirical experiment. Strengths: - The paper is nicely written and easy to follow. - The inflated argmax is shown theoretically to be the maximizer of classifier stability. Weaknesses: - The authors argue that the argmax is a hard operator that may make the classifier unstable even if only small changes of the outputs occur, but the definition of stability used in the paper also uses the hard operator of the set intersection being empty. - The definition of the stable classifier as defined in this paper is new to me, and based on the paper, I struggle to find a good motivation for using this specific definition and focusing on it. - The method requires bagging, which makes it difficult to apply with heavy architectures. - The empirical results are limited to a single experiment. - The empirical experiment uses baselines that do not work in the framework of set-valued prediction and always predict only one class, which puts them at a clear disadvantage under the considered task. The comparison should include other set-valued classifiers, e.g., classifiers mentioned in related works or classifiers that optimize simple set-utility functions as in [1] and [2], which can be very efficiently optimized as discussed in [3]. [1] Del Coz JJ, Díez J, Bahamonde A. Learning nondeterministic classifiers.
2009 [2] Zaffalon M, Corani G, Mauá DD. Evaluating credal classifiers by utility-discounted predictive accuracy. 2012 [3] Mortier T, Wydmuch M, Dembczyński K, Hüllermeier E, Waegeman W. Efficient set-valued prediction in multi-class classification. 2021 Nits: - "The more recent literature has focused on producing a sparse set of weights, but none of these works offer a formal stability guarantee for the support of the weights." This statement needs a citation. Technical Quality: 2 Clarity: 2 Questions for Authors: - What are the specific applications benefiting from this notion of stability? - How does the introduced inflated argmax compare with other set-valued classifiers? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Limitations are discussed. I see no negative social impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. *“The authors argue that the argmax is a hard operator… but the definition of stability used in the paper also uses the hard operator of the set intersection being empty”* The argmax does not satisfy $\epsilon$-compatibility. The point of the paper is to introduce an $\epsilon$-compatible relaxation of the argmax, which can be combined with bagging to stabilize any classifier. This relaxation is necessary even though the definition of stability uses a hard operator. 2. *“The definition of the stable classifier … I struggle to find a good motivation for using this specific definition and focusing on it.”* and *“What are the specific applications benefiting from this notion of stability?”* Selection stability controls the probability that our algorithm makes contradictory claims when dropping a single data point at random from the training set. In set-valued classification, $\hat{S}$ represents the set of candidate labels returned by the model. This set should not totally change just by dropping a single observation from the training set. In our revision, we will expand Section 2 of the paper with a more extensive discussion motivating this definition. 3. *“The method requires bagging, which makes it difficult to apply with heavy architectures.”* Please see discussion point 2 in our global response. 4. *“The empirical results are limited to a single experiment.”* The main contribution of our paper is theoretical: we show how to guarantee selection stability for any classifier. Our emphasis is thus on the idea and the theoretical contribution. Our experiment in Section 4 and simulation in Section D.2 serve merely as illustrations of these results to show how they work in practice. 5. *“The empirical experiment uses baselines that do not work in the framework of set-valued prediction…”* and *“How does the introduced inflated argmax compare with other set-valued classifiers?”* Please see discussion point 1 in our global response. 6.
*“‘The more recent literature…’ This statement needs citation.”* We should have said ‘the more recent papers in this line of work’ to make it clear that we were referencing the specific citations in the previous sentence. We will make this clearer in the revision.
Structured Multi-Track Accompaniment Arrangement via Style Prior Modelling
Accept (poster)
Summary: This paper introduces a novel music AI system that creates multi-track accompaniments from a lead sheet by leveraging disentangled style factors to enhance context coherence, creativity, and computational efficiency. The proposed two-stage process begins with generating a piano arrangement using piano texture styles, followed by orchestrating multiple tracks by incorporating orchestral function styles. The system employs vector quantization and a multi-stream Transformer for modeling orchestration styles, significantly improving arrangement quality and providing flexible control over various music genres and compositional levels. Strengths: 1. The paper addresses the significant task of Multi-Track Accompaniment Arrangement, making notable improvements in the field. 2. The authors have diligently constructed a comprehensive framework in both the experimental design and methodology, contributing valuable insights and data to the field. Weaknesses: 1. The introduction lacks clarity, with key terms and relationships not sufficiently defined before introducing mathematical notations and formulations. This obscures the motivation and makes it difficult to understand, especially as the abstract outlines challenges related to coherency, creativity, and efficiency that are not clearly addressed in the introduction. 2. The contributions of the paper, as discussed between lines 46-60 and 85-94, appear to be incremental rather than significant, primarily building upon work from AccoMontage. 3. There is a consistent omission of motivation when new concepts are introduced. For instance, the motivation behind formulating the multi-track accompaniment arrangement task is not made clear in the introduction after discussing style representations. Similarly, the reasons behind specific mechanisms in sections 3 and 4 are not elaborated upon. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. 
Why does concatenating style factors potentially contradict the structure of existing conditions as mentioned on line 25? 2. The paper discusses conflicts when combining style factors in sequence generation tasks; could these be resolved using different methods like diffusion models? 3. How do the initial discussions on introducing style into long-term generation relate to the paper’s main theme of multi-track accompaniment arrangement? 4. What prompted the formulation of the problem between lines 32-36 in paragraph 3, and how is time t determined? 5. Is there a distinction between 'piano reduction' and 'piano track'? The terminology varies between italics and normal text; does this signify a difference? 6. What is the rationale behind using VQ-VAE and VAE to encode orchestral functions and piano reductions in section 3.2? 7. What motivations underlie the construction of the Multi-Track Orchestral Function Prior in section 3.3? 8. Can you explain what inputs the proposed model receives and provide comparisons to highlight its unique features, as seen in Figure 4? 9. How does the model handle both ABC notation and MIDI, and is there a conversion of ABC notation into the representation discussed in section 3.1? 10. Why is the computational efficiency of this model less optimal compared to [20]? 11. How are the 'style' factors controlled and tested in the experiments? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and constructive feedback! We acknowledge that some terms in our paper may not be sufficiently clear (**W1**). We also recognize our shortcomings in providing adequate motivation to justify our design choices (**W3**). Please allow us to first respond to the relevant points raised in **W2** and **Questions**. We hope this addresses your concerns. We will then comprehensively incorporate these clarified ideas into our manuscript to provide a sufficiently clear introduction and well-justified motivation. **W2. Significance of Contribution** Our proposed *style prior modelling* is not built upon AccoMontage. It is an original methodology that effectively addresses long-term, structured conditional sequence generation. AccoMontage addresses piano arrangement only, while our method is more scalable and addresses more challenging practical problems. Building upon *style prior modelling*, we introduce the first whole-song, multi-track accompaniment arrangement system, which supports variable music length ($T$) and controllable instrumental tracks ($K$). **Q1. Why concatenating style factors may contradict the structure of existing conditions** A: The general problem that we study is *conditional sequence generation*. The input is a conditional sequence $\mathbf{c}\_{1:T}$ and the output is an observational sequence $\mathbf{o}\_{1:T}$. *We assume an underlying style factor sequence $\mathbf{s}\_{1:T}$*, which can render/arrange the content of $\mathbf{c}\_{1:T}$ to realize $\mathbf{o}\_{1:T}$. By *style prior modelling* of $\mathbf{s}\_{1:T}$, we aim to improve interpretability, controllability, and performance. We note that the existing condition $\mathbf{c}\_{1:T}$ often implies a long-term structure. The style factor $\mathbf{s}\_{1:T}$, if not structurally aligned with $\mathbf{c}\_{1:T}$, will lead to incoherent results. **Q2.
Can diffusion models address this?** A: Diffusion models and Transformers can be distinct implementations of the same methodology. The context dependency of diffusion models differs from that of Transformers and we will explore this in our future research. **Q3. Relation to accompaniment arrangement** A: Accompaniment arrangement is a typical task of *long-term* conditional sequence generation. The input is a lead sheet that implies a verse-chorus structure of the whole song. The output is the accompaniment, and the style factor regards the sequential *form* of the accompaniment. In this paper, we consider multi-track, whole-song arrangement, where it is challenging to maintain both track cohesion and structural coherence. **Q4. Prompt of the formulation in lines 32-36** A: Continuing from Q1 & Q3, our general idea is to model $\mathbf{s}$ conditional on $\mathbf{c}$. We thus come to the formulation of $p(\mathbf{s}\_{1:T}^{1:K} | \mathbf{c}\_{1:T})$. Superscript $k=1, 2, \cdots K$ denotes track indices of multi-track arrangement. Subscript $t = 1, 2, \cdots, T$ denotes component segments of the whole song. We consider each segment as a 2-bar music snippet, which is a proper scale for learning content/style factor representations. **Q5. *Piano reduction* vs 'piano track'** A: We use *piano reduction* to denote the overall content from a multi-track piece. We will unify all instances of *piano reduction* into italic text. On the other hand, a ‘piano track’ simply refers to a general track played by piano. **Q6. Rationale for VQ-VAE and VAE** A: We use VAE to learn the nuanced content from the *piano reduction* because existing works have shown VAE’s effectiveness for music representation learning. We further choose VQ-VAE for the *orchestral function* because common patterns of orchestral tracks (like syncopation, arpeggio, etc.) can naturally be categorized as discrete variables. 
Moreover, VQ-VAE learns a discrete latent space on which a prior model can be mounted. **Q7. Motivations for Multi-Track Orchestral Function Prior** A: By constructing the Multi-Track Orchestral Function Prior, we *recover the underlying style factor sequence* that can further render/arrange the input content into multi-track accompaniment. This design is more interpretable and controllable while also adhering to music composition practice. Experiments also demonstrate superior performance against non-prior baselines. **Q8. Model inputs in Figure 4** A: In Figure 4, the model receives two inputs: a lead sheet shown by the “Mel” staff, and a set of instruments (user control) shown by the remaining staff labels. The output is the accompaniment in the remaining staves. The length of the lead sheet determines $T$ and the number of instruments determines $K$. **Q9. ABC notation** A: ABC is essentially a score notation for lead sheets, which can be converted into MIDI and the representation in Section 3.1. **Q10. Computational Efficiency of GETMusic** A: GETMusic is a diffusion model with only 100 diffusion steps, thus achieving notable computational efficiency. **Q11. How the style factors are controlled and tested** A: The control over style factors is covered in lines 173-176. Firstly, users can customize the instruments to steer the Multi-Track Orchestral Function Prior. Moreover, a starting prompt can optionally be provided. In our experiment, these control choices are randomly sampled from the Slakh2100 test set. The effectiveness of $\mathbf{s}\_{1:T}$ is manifested in the quality of the resulting $\mathbf{o}_{1:T}$. Experiments show that *our method achieves the top chord, structure, and DOA scores for long-term arrangement*, which demonstrates the effectiveness of style factors and the superiority of *style prior modelling*. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed and thoughtful response to my review.
Your clarifications, particularly regarding the significance of your contribution and the motivations for the methodologies employed, provide a deeper understanding of your work. I look forward to seeing these enhanced explanations incorporated into the revised manuscript, as they will undoubtedly strengthen its clarity and justification.
Summary: The paper presents a style transfer-based music accompaniment generation system. It starts with the lead melody and fleshes out the accompaniment tracks based on various high-level information, such as instrument type and rhythmic structure. The main goal of the proposed model is to be able to generate coherent structure in long music sequences while keeping them cohesive. The model architecture adopts VQ-VAE modules to learn latent representations, while the sequence learning part is done via Transformer models applied orthogonally along the track and time dimensions. The objective and subjective test scores look promising. Strengths: - The paper presents reasonable intermediate representations of symbolic music, piano reduction and orchestral function, which are used to learn discrete embeddings by the VQ-VAE model. - The proposed model is evaluated in both objective and subjective manners, where it outperformed the other methods. Weaknesses: - The description of the proposed model is not organized well, so it is difficult to understand. - Some high-level choices the authors made are not well justified, such as discrete embeddings vs. continuous, Gaussian noise vs. other noise, etc. Technical Quality: 4 Clarity: 2 Questions for Authors: - In Fig. 1, the autoencoder takes the piano reduction information as its first input. I imagine it would correspond to the lead sheet. Does it mean that during the training time the output of the autoencoder is the full sheet with all tracks? Then this isn't an autoencoder, right? The precise training procedures are not detailed enough in this manuscript. - If the input PN is the full music during training, how does the model generalize to the test-time scenario when the input has to be only the lead sheet? - The manuscript seems to be based on the concept of 1 segment = 8 beats. Is this correct? Any rationale behind this choice? - Gaussian noise is used to regularize the model.
But if it's to deal with the domain shift between piano reduction representations, wouldn't there be a different type of noise that's more suitable, i.e. more discrete random variables? - It appears that the "s" embeddings contain important information about the track to be generated. Since it's learned in an unsupervised way, it's a little hard to imagine how these embeddings are more interpretable than other existing conditioning vectors as the authors claim. Confidence: 4 Soundness: 4 Presentation: 2 Contribution: 3 Limitations: - Basically it's not clear how Fig. 1 and Fig. 2 are combined to form the pipeline described in Fig 3. Colored figures, shades, and hashes show the authors' effort in describing this method, but it's not entirely clear to me. - It appears that the system takes the user input to designate which instrumental track to generate based on the instrument embedding. It's a nice feature, but it also means that, for evaluation, somebody has to come up with a nice orchestration (i.e., which instruments go well given the lead melody). Was this critical selection of instruments done by the authors to generate the test sequences? Unless I missed this part, it has to be clearly mentioned, as it can affect the performance of the model (in comparison to other methods that don't have such a concept). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and constructive feedback! Please allow us to first respond to the points raised in **Limitations** and **Questions**. We hope this addresses your concerns. We will then comprehensively incorporate these clarified ideas into our manuscript to provide a sufficiently clear organization (**W1**) and well-justified motivation (**W2**). **L1. Elaboration on the model architecture** A: In this paper, we introduce a two-stage system to address the challenging task of multi-track accompaniment arrangement. Stage 1 arranges the lead sheet into a piano accompaniment, and Stage 2 arranges the piano into a multi-track full sheet. Fig. 3 in the manuscript demonstrates the two-stage pipeline, which sequentially arranges piano/full sheets using respective style factors. The technical method of this paper mainly focuses on Stage 2, where we use the autoencoder (Fig. 1) to disentangle *piano reduction* $\mathrm{pn}[\mathbf{x}]$ (content factor) and *orchestral function* $\mathrm{fn}[\mathbf{x}]$ (style factor) from the full sheet $\mathbf{x}$. In Stage 2, $\mathrm{pn}$ is given as the condition and we propose the prior model (Fig. 2) to infer $\mathrm{fn}$. **L2. User control on instrument designation** A: Our system does rely on user designation of instrumental tracks, and we see it as an intuitive and handy control. In practice, users can try out preset instrument ensembles (e.g., pop band with piano, guitars, and bass) without added burden. In our experiment, without loss of generality, this control choice is randomly sampled from the Slakh2100 test set (mentioned in line 222). **Q1. Input/output of autoencoder and training detail** A: The autoencoder takes two inputs: the *piano reduction* $\mathrm{pn}[\mathbf{x}]$ and the *orchestral function* $\mathrm{fn}[\mathbf{x}]$. The output is the full sheet $\mathbf{x}$. Note that both $\mathrm{pn}[\mathbf{x}]$ and $\mathrm{fn}[\mathbf{x}]$ are deterministic transforms of $\mathbf{x}$.
This is an inductive bias for content/style disentanglement. The ultimate input is $\mathbf{x}$ and the training is based on a self-supervised reconstruction objective. We can see similar autoencoder designs in other disentanglement works [1] as well. **Q2. How the model generalizes to the test-time scenario** A: At test time, the autoencoder takes the piano arrangement from Stage 1 as its first input. **Q3. The rationale behind segment scale** A: Yes. We consider 1 segment = 8 beats. This is a proper scale (i.e., neither too short nor too long) to capture composition structures. Existing studies on music representation learning have also applied the 8-beat scale in their work [2,3]. **Q4. The rationale for Gaussian noise against discrete ones** A: We note that the *piano reduction* encoder learns *continuous* content representations instead of discrete ones (vector quantization is not applied here). Music content can be nuanced and thus better described with continuous variables. We hence see Gaussian noise as a natural choice for the domain shift regularization. **Q5. How the $\mathbf{s}$ embeddings are interpretable** A: The $\mathbf{s}$ embeddings are encoded from the *orchestral function* $\mathrm{fn}[\mathbf{x}]$, which essentially describes the *form*, or *layout*, of multi-track music $\mathbf{x}$. It contains rhythmic intensity information, telling the model where to put more notes and where to keep silent. When learning the $\mathbf{s}$ embedding from $\mathrm{fn}[\mathbf{x}]$, we apply vector quantization because common rhythmic patterns (like syncopation, arpeggio, etc.) can naturally be categorized as discrete variables. Moreover, VQ-VAE learns a discrete latent space on which a prior model can be mounted. [1] Z. Wang, et al. Audio-to-symbolic arrangement via cross-modal music representation learning. ICASSP 2022. [2] A. Roberts, et al. A hierarchical latent vector model for learning long-term structure in music. ICML 2018. [3] R. Yang, et al.
Deep music analogy via latent representation disentanglement. ISMIR 2019.
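To make the vector-quantization argument in the Q4/Q6 answers concrete, here is a minimal sketch of the nearest-codebook lookup at the core of a VQ-VAE (illustrative only; the actual model also involves learned encoders, codebook updates, and commitment losses, and `quantize` is our hypothetical name):

```python
def quantize(z, codebook):
    """Nearest-neighbour codebook lookup, the discretization step of a
    VQ-VAE.  z: list of T continuous d-dim segment encodings; codebook:
    list of K d-dim code vectors.  Returns the discrete indices
    s_1..s_T (on which a sequential prior model can then be fit) and
    the corresponding quantized vectors."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    idx = [min(range(len(codebook)), key=lambda k: sqdist(v, codebook[k]))
           for v in z]
    return idx, [codebook[i] for i in idx]
```

The index sequence is exactly what makes a discrete latent space convenient for a prior model to mount on: the prior simply models a sequence of categorical symbols.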
Summary: The paper suggests creating multi-instrument accompaniment by using the piano reduction and the instrument note density (referred to as the 'Orchestration Function') as bootstrap representations. By effectively applying VAE and an autoregressive sequence generation framework, the paper demonstrates the high potential and effectiveness of the proposed approach for learning musically structured accompaniment. The paper also proposed a 'layer interleaving architecture' that processes the orchestration codec by alternating between the time axis and the track axis. The paper compares with previous works with objective and subjective evaluation and shows its validity. Strengths: The strength of this paper, I believe, lies in the factorization of accompaniment generation by leveraging the fact that piano reduction and 'orchestration function' can be freely produced as middle-level features from multi-track MIDI. By separating each process, as the authors claim, it becomes applicable to scenarios where users can exert control. Additionally, by dividing the modeling of each accompaniment's prior and the modeling of detailed notes, the efficiency of learning for each model is expected to have increased. Weaknesses: The proposed methodology utilizes piano reduction as a condition, so the quality of the final orchestra accompaniment is expected to vary depending on the quality of the piano reduction. Upon listening to the provided examples, it is evident that notes not present in the piano reduction were added, confirming that the proposed model's role extends beyond merely rearranging the notes of the piano reduction. However, there needs to be a discussion on the impact of using piano reduction on the model's high evaluation compared to other models. Additionally, while it was mentioned in L168 that another module was used, a more detailed explanation of how the piano reduction was generated is necessary. Technical Quality: 4 Clarity: 4 Questions for Authors: Q1. 
From my understanding, the 'orchestration function' in Eq. 1 eliminates pitch information. Thus, I expect the priors (${s_t}$) to provide note-density-like bare information. What do you expect to be encoded in $s$? Also, as a follow-up question, what kinds of correlation are expected along with $s_t$, and why does it make sense to train a sequential prediction model for $s$? I'm not sure about the expected role of the prior model. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: I generally agree with the authors about the limitation presented in Appendix E. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and thoughtful feedback! We hope the following will address your concerns: **Weakness: Discussion on the impact of the piano reduction quality** A: We introduce *piano reduction* as an intermediate representation from the input lead sheet to the final orchestra arrangement. Intuitively, *piano reduction* is a hierarchical and more abstract planning of the final orchestra. The orchestrator at Stage 2 may add notes, but it still falls within the scope implied by the *piano reduction*. To formally investigate the impact of *piano reduction*, we conduct an ablation study by replacing the original piano arranger with the *Whole-Song-Gen* model [1], which, to our knowledge, is the only existing alternative that can handle a whole-song structure. The ablation study is conducted in the same setting as Section 5.3. We report objective evaluation results regarding the final orchestra’s *chord accuracy*, *structure awareness*, and *degree of arrangement* (*DOA*) as follows:

| | *Chord Acc* $\uparrow$ | *Structure* $\uparrow$ | *DOA* $\uparrow$ |
| -------- | ------- | ------- | ------- |
| Ours | $0.567 \pm 0.014^a$ | $1.520 \pm 0.030^a$ | $0.300 \pm 0.004^a$ |
| Using *Whole-Song-Gen* | $0.509 \pm 0.015^b$ | $1.121 \pm 0.006^b$ | $0.277 \pm 0.006^b$ |

We can observe that *Whole-Song-Gen* at Stage 1 generally deteriorates the quality of the final orchestra. To see why this happens, we further compare *Whole-Song-Gen* with our original piano arranger exclusively on the piano arrangement stage.
We report objective evaluation results regarding the piano’s *chord accuracy* and *structure awareness* as follows:

| | *Chord Acc* $\uparrow$ | *Structure* $\uparrow$ |
| -------- | ------- | ------- |
| Original piano arranger | $0.540 \pm 0.016^a$ | $1.983 \pm 0.147^a$ |
| *Whole-Song-Gen* | $0.430 \pm 0.020^b$ | $1.153 \pm 0.18^b$ |

By comparing the two tables, we can see that a higher-quality *piano reduction* generally encourages a more musical and creative final orchestra result. Particularly, *piano reduction* *lays the groundwork for (at least) chord progression and phrase structure*, both of which are important for capturing the long-term structure in whole-song arrangement. Overall, we confirm that the *piano reduction* is an abstract planning of the final orchestra result. Thus its quality is positively correlated with the orchestra quality. Meanwhile, we see that *our current piano arranger significantly outperforms existing alternatives and guarantees decent piano quality*, thus being the best choice for our model. Additionally, both the piano arranger at Stage 1 and the orchestrator at Stage 2 can be applied as independent modules to address respective subtasks. [1] Z. Wang, et al. Whole-song hierarchical generation of symbolic music using cascaded diffusion models. ICLR 2024. **Q1. What information is encoded in $\mathbf{s}$** A: The $\mathbf{s}$ embeddings are encoded from *orchestral function* $\mathrm{fn}[\mathbf{x}]$, which essentially describes the *form*, or *layout*, of the orchestra sheet $\mathbf{x}$. It contains rhythmic intensity information, telling the model where to put more notes and where to keep silent. **Q2. What kind of correlation to be expected along with $\mathbf{s}\_{1:T}$** A: $\mathbf{s}\_{1:T}$ encodes the sequential *form* of the orchestra sheet.
As $t$ goes from $1$ to $T$, *with the development of music*, we may see new rhythm patterns being introduced and new instrumental tracks being activated (e.g., in Figure 4 of the manuscript, the piano track is activated in the second half of the piece to mark increased atmosphere). More importantly, the lead sheet (and piano reduction) $\mathbf{c}\_{1:T}$ implies a verse-chorus structure of the whole song. Our prior model infers $\mathbf{s}\_{1:T}$ conditional on $\mathbf{c}\_{1:T}$, thus guaranteeing the structural alignment between $\mathbf{s}\_{1:T}$ and $\mathbf{c}\_{1:T}$.
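To make the rebuttal's notion of an orchestral function concrete, here is a minimal sketch (our illustration, not the authors' code) of how such a function could be computed for one track, assuming the modified pianoroll representation the authors describe elsewhere, where a nonzero entry stores a note's duration at its onset position:

```python
import numpy as np

def orchestral_function(x):
    """Per-frame note-onset count of one track.

    x: modified pianoroll of shape (T, P); a nonzero entry x[t, p]
    stores the duration of a note whose onset is at frame t and
    pitch p, so (x > 0) marks onset positions only.
    """
    return (x > 0).sum(axis=1)

# Toy track: two note onsets at frame 0, one at frame 2.
x = np.zeros((4, 8), dtype=int)
x[0, 2] = 4   # note of duration 4 starting at frame 0
x[0, 5] = 2
x[2, 3] = 1
print(orchestral_function(x))  # [2 0 1 0]
```

Note how the result discards pitch entirely, matching the reviewer's observation that only note-density-like information survives.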
Summary: The authors introduce a new model for generating multi-track accompaniment given a lead sheet. While being strictly a two-stage method (first generate a piano arrangement from the lead sheet and *then* generate the accompaniment), the authors' contribution is focused on the 2nd stage, and they rely on existing modules for the 1st stage. Their proposed method features two key components: **first**, a VQ-VAE submodule whose aim is to learn (quantized) representations of "orchestral functions" (orchestral functions being 1-D time-series representations of a track, with each element representing the sum of notes active in each segment, more on that in weaknesses), and, **second**, a transformer-based "mixing" VAE which combines the representations of the VQ-VAE (i.e., the representations of the orchestral functions) with learnt representations of "piano reductions" of each song (derived by averaging all individual tracks) to generate latent representations which are used to synthesize the full song. A key novelty of their model is the "mixing" part of the VAE, with the decoder utilizing interleaved time-wise and track-wise cross-attention layers -- meaning, that each output "sees" information both from past frames in its own track and from other tracks. Their system compares favorably to 3 standard baselines taken as-is from prior work or reproduced. Strengths: Overall, barring some presentation issues discussed below, this is a well-written paper that features state-of-the-art results and enough novelty to make it relevant for both the broader generative AI community (given the difficulty of generating long-form, complex music which the authors tackle with hierarchical modelling) and the more niche music-generation community. This is why I am leaning positive towards accepting the paper. I will mention some more of its strengths in detail: + The authors target a challenging task which is to generate a multi-track arrangement given a lead sheet. 
Even though they "only" focus on the 2nd stage of that pipeline, namely the generation of the multi-track arrangement given a piano arrangement, it is still a valid contribution. + The idea to disentangle the piano reduction and orchestral function and combine them in the way they do is interesting and novel. This also provides a nice element of controllability. + The fact that the model can be trained in "self-supervised" fashion (i.e. without manual annotations) is commendable. + The authors have done a thorough job of documenting experimental parameters (data, GPU runtime, etc.) and submit code to reproduce their experiments. + The authors have accounted for different sources of variability and provided multiple "knobs" to twist (and counteract domain mismatches) through the clever use of multiple "positional" embeddings (Fig. 2). Weaknesses: The main weaknesses of the paper are primarily in terms of presentation, rather than methodology or evaluation. My comments are mainly meant to help the authors improve their presentation. * Clarity of music concepts: Sometimes, the authors introduce or reference concepts which are not straightforward without domain knowledge. Examples of that are: * The orchestral function of x is introduced as a column sum of activation indicators over the different columns of X. Given that the columns of X represent MIDI pitches, this would mean that at each frame, the authors are summing the notes that are active. Yet, in p.3, l.115, they write that this indicator function "counts each note onset position as 1". Why onset only? This would mean that each note is counted once. But the equation states $x_t^k>0$, which appears to be evaluated for each frame (thus, not only at the onset of each note, but for its entire duration). Further, they state that "it [orchestral function] describes the rhythm and groove of each track". It is not at all clear to me why this orchestral function should describe the rhythm? 
It only appears to describe the number of notes active at a given time. It's even less clear why it should describe the groove, given the subjective nature of "groove". It is okay that the authors attempt to give a layman's explanation of their method in this part, but it is also recommended that they better describe those concepts (and do so in a way that would be acceptable even to those that are not familiar with concepts from music theory). * There are other terms, such as "counterpoint", "phrase", etc. which will not be evident to an audience not familiar with them. While it is understandable that in a music generation paper, there will be a lot of domain language, it would be useful if the authors commented on their importance in their discussion of results. For example, the ILS metric measures the similarity within a music phrase vs the rest of the song. What does it mean that the authors' method does better than the baselines in this metric? That it produces more coherent phrases that are more easily distinguishable from the rest of the song? Adding a sentence to clarify the importance of each result (in, as far as this is possible, layman's terms) would improve the readability and outreach of the paper. * Comparability to baselines: The authors compare to three established baselines. However, with the exception of PopMAG, which the authors reproduce on part of the data they used to train their own model, the other two baselines are not strictly comparable. AMT requires ground-truth accompaniment (arguably a limitation of that method compared to the authors'), and GETMusic was trained on different data (and additionally handles different instrumental tracks). While this is not a no-go, and is common practice for generative methods in general, it should be mentioned in 5.2.
* The description of *interleaving* layers in the autoencoder -- according to the authors themselves, a major novelty of the work -- does not feature as prominently in their text as it should (given its stated importance), and could be improved in terms of clarity. Specifically, it is not entirely clear what the "Track Encoder" does in Fig. 2. Is it a standard residual "transformer" (it is not clear from A.2 if a residual connection is there) layer that essentially combines information across tokens? It would be beneficial to describe this in 3.3 using simple equations like in 3.2. One important open question is why this track encoder, if it is indeed based on self-attention, is different than the decoder? Specifically, why does one integrate information in the time-axis and the other on the track-axis? It would be important to show the dimensions here. I guess you have: $(B, K, N, P)$, where B is the batch size, K the number of tracks, N the duration of each note, and P the number of notes (I guess N&P are eventually downsampled from their original dimensions). So the inter-track encoder would operate on tensors $(B\cdot N, K, P)$ and then the intra-track decoder would operate on tensors $(B\cdot K, N, P)$? All this of course would be done in autoregressive fashion, so it would go from $\{1...N\}$. It would be nice to show this using the notation of the authors. * The VQ-VAE is described (A.1) and portrayed (Fig. 1) as having a convolutional encoder with a stride of 4 (thus a downsampling of 4) and a fully-connected decoder. It would be useful to show how this fully-connected decoder mitigates the downsampling of the encoder to have an output size equal to the input. 
Minor comments: + p.7, l.240: Best substitute "denoising process" with denoising diffusion probabilistic model (if this is what this meant), as the term denoising can be ambiguous Technical Quality: 4 Clarity: 3 Questions for Authors: There is only one question, regarding the interleaving of layers, but since it is also a weakness, I have included it there instead. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The authors have adequately discussed their limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and constructive feedback! We value your thorough and insightful comments and will revise our manuscript based on these suggestions to improve the presentation. We also hope the following could address your existing concerns: **Weakness 1. Clarification of music concepts** A: Yes, we see the importance of clarifying music-related terms to make our paper more comprehensible. Certain terms (like 'groove') may have slightly varied definitions in music and AI research. We will use them in the research context and make the best effort to explain them well. We provide an explanation of the raised points as follows: (Count each note onset position as 1) - In this paper, we represent MIDI tracks using the *modified* pianoroll representation [1]. Instead of spreading out the full duration of a note, each non-zero $\mathbf{x}_t^k \in \{1, 2, \cdots, 32\}$ denotes the duration of a note at timestep $t$ and track $k$. Hence the indicator function $\mathbf{x}_t^k>0$ recovers the note onset positions. (*Orchestral function*, rhythm, and groove) - Our intuition with *orchestral function* is inspired by the "grooving pattern" introduced in [2], the essence of which is to use note onset densities (in our case, intensities) to describe rhythms. Groove admittedly has a subjective nature, but it is generally associated with the rhythm. In our paper, we mention "groove" in the same context as [2] and we will clarify this in our future revision. [1] Z. Wang, et al. Learning interpretable representation for controllable polyphonic music generation. ISMIR 2020. [2] S.-L. Wu and Y.-H. Yang. The jazz transformer on the front line: Exploring the shortcomings of ai-composed music through quantitative measures. ISMIR 2020. **Weakness 2. Clarification with other terms** A: Yes, we see the benefits of interpreting the essence of specific terms that may otherwise be too abstract to comprehend.
Your interpretation of the ILS metric is well aligned with our intention. We will follow this example to elaborate on the other terms. **Weakness 3. Comparability to baseline** A: We will clarify our model’s comparability to each baseline in Section 5.2. **Weakness 4. (Q1.) Interleaving Layers** A: We will improve the presentation of layer interleaving in our future revision. Regarding model dimensions, we have tensors $(B, K, T, D)$, each being batch size, track number, time frames (downsampled), and feature dimension. The inter-track encoder operates on tensor $(B \times T, K, D)$, and the autoregressive decoder on $(B \times K, T, D)$. We note that the inter-track encoder is not autoregressive, but a standard residual Transformer Encoder layer. This gives the user full control over initializing the number ($K$) and instruments of tracks. We will supplement the missing details in the VQ-VAE framework and revise the ambiguous terms. Thank you again for your constructive advice on the paper presentation!
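The dimension bookkeeping in the rebuttal can be sketched in a few lines; this is our own numpy illustration of the interleaving pattern, with `attend` as a stand-in for the real attention layers (the actual model uses a bidirectional Transformer encoder across tracks and a causal, autoregressive decoder across time):

```python
import numpy as np

B, K, T, D = 2, 4, 8, 16          # batch, tracks, time frames (downsampled), feature dim
h = np.random.randn(B, K, T, D)

def attend(x):
    # Placeholder for a residual self-attention layer over x's middle axis.
    return x

# Inter-track encoder: attention across the K tracks at each time frame.
h = h.transpose(0, 2, 1, 3).reshape(B * T, K, D)
h = attend(h)
h = h.reshape(B, T, K, D).transpose(0, 2, 1, 3)

# Intra-track decoder: (causal) attention along the T frames within each track.
h = h.reshape(B * K, T, D)
h = attend(h)
h = h.reshape(B, K, T, D)

print(h.shape)  # (2, 4, 8, 16)
```

The reshapes answer the reviewer's question directly: the same $(B, K, T, D)$ tensor is folded so that one layer mixes information track-wise and the next time-wise.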
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Meta-Controller: Few-Shot Imitation of Unseen Embodiments and Tasks in Continuous Control
Accept (poster)
Summary: This paper presents a method for few-shot imitation learning for new embodiments. Specifically, it presents a method that learns a state representation that decouples embodiment-specific and task-specific knowledge and a meta-learning framework that transfers between embodiments and tasks. Results suggest that the resulting policy improves over decision transformer-based or modular policy-based baselines. Strengths: 1. A design choice (separating the state encoder from the matching-based policy network) that decouples state learning from task-specific knowledge learning. 2. The proposed method demonstrates better generalization to unseen embodiments in DMC, outperforming existing baselines. Weaknesses: Only demonstrates results in DMC. The reviewer is not certain if this scales to real-world control environments, as the different embodiments operate at various scales, and the model needs to be robust to environment dynamics (here the environment is static). The reviewer acknowledges that this limitation is also mentioned in the limitation section. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Given that state representation learning is decoupled from policy representation learning, it would be interesting to visualize the embedding learned by the state encoder and see how distinct it is between different morphologies. 2. Does the performance of the model scale with model capacity and data? I.e., currently the model is pre-trained with “28 tasks from 9 embodiments … up to 2000 demonstration trajectories for each task and embodiment”. What if some of the tasks and embodiments are removed? Do some embodiments/tasks matter more than others? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitation section is present. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1.** Given that state representation learning is decoupled from policy representation learning, it would be interesting to visualize the embedding learned by the state encoder and see how distinct it is between different morphologies. We provide a t-SNE visualization of the embedding space of features obtained by the structure encoder in Figure R.1 of the author rebuttal. First, we can observe that the embeddings are clustered by the joints of each embodiment, and the clusters corresponding to the same embodiment are located nearby. Also, we note that the embeddings of slide joints (e.g., cartpole, cartpole_two, cup, pointmass) and the embeddings of hinge joints (e.g., reacher, reacher_three, walker, acrobot, cheetah, pendulum) are separated in the right and the left regions. Thus the state encoder captures both embodiment-specific and joint-specific knowledge to give rich features to the policy network. > **Q2.** Does the performance of the model scale with model capacity and data? To address the reviewer’s question, we conducted additional experiments to validate the effect of data scale and how different compositions of the meta-training dataset affect the performance of downstream behavior cloning. Specifically, we select 3 combinations of training tasks, where we remove 4 embodiments from the original 10 embodiments. Then, we performed 5-shot behavior cloning experiments on the 8 tasks presented in the main table. According to Table R.4 of the author rebuttal, we observe that using all embodiments outperforms the baselines in most tasks, indicating that a diverse set of embodiments makes the model robust to unseen embodiments. Our findings also show that it is crucial to include embodiments with morphology and dynamics similar to the downstream tasks in the meta-training dataset. For example, removing the reacher_three task (as seen in row 2 and row 3) significantly degrades the performance on the reacher_four task.
This result reveals that embodiments with similar dynamics or morphological features can facilitate more effective knowledge transfer during meta-testing, suggesting that the diversity of data greatly impacts performance. > **W1.** Only demonstrates results in DMC. The reviewer is not certain if this scales to real-world control environments, as the different embodiments operate at various scales, and the model needs to be able to be robust to environment dynamics (here the environment is static). Our experiments were conducted exclusively within the DeepMind Control Suite (DMC) because, to our knowledge, it is the only benchmark currently available that includes both diverse embodiments and tasks. Although our experiments lack real-world validation, we clarify that our primary contribution is the introduction of a fundamental framework for simultaneous generalization to unseen embodiments and tasks with few demonstrations, which we believe addresses a significant challenge in the field. To address concerns about the transferability of our approach to real-world control environments, we have extended our experiments to simulate more realistic conditions. Specifically, we introduced varying levels of noise to the control actions within the DMC environments, thereby making the transition dynamics both stochastic and noisy. As illustrated in Figure R.5 of the author rebuttal, the performance of our Meta-Controller does not significantly drop despite the presence of random noise. This result underscores the potential of our method to operate effectively under realistic conditions that involve complex dynamics and noise. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed response. I maintain my positive rating.
Summary: This paper aims to generalize to unseen embodiments and tasks by few-shot behavior cloning. It proposes a modular framework to capture both shared knowledge across all embodiments and embodiment-specific knowledge. It utilizes a matching-based method to enhance the robustness to overfitting. It shows superior performance in DMC simulations. Strengths: * The framework proposed by the paper is reasonable. In order to generalize to different embodiments and share common knowledge, it designs embodiment-specific parameters and shared parameters for networks. * The paper demonstrates strong performance compared with the baselines. Weaknesses: * The models are complex and matching-based policies are time-consuming. The models consist of multiple different transformer blocks. The matching-based policies need to compute similarities with all pairs in demonstrations to predict one action. * The environments are too simple. The paper only focuses on the DeepMind Control suite and they are only state-based environments. What about the performance when the environments are image-based or manipulation tasks? Technical Quality: 3 Clarity: 2 Questions for Authors: * See weakness * What is the learning curve for the models compared with PDT+PEFT? Are the models more efficient since they are modular networks? * The experiments only compare the results of at least 5-shot. What about the performance when given fewer demonstrations? * The episode sampling strategy is confusing. The claim "D and Q are obtained from adjacent training epochs" limits the adaptation of the models. "a temporal segment size of 10 for episodic meta-learning". Does it mean that the horizon of each few-shot demonstration is 10 and is it enough to provide task information? It is better to provide more details about few-shot demonstrations.
Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: * The paper has provided several limitations including simulations, highly stochastic dynamics, long-term planning and computational complexity. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1.** What is the learning curve for the models compared with PDT+PEFT? We provide learning curves of Meta-Controller and PDT+PEFT in Figure R.2 of the author rebuttal. As the reviewer pointed out, due to the modular nature of our structure encoder, our model not only achieves better performance but also converges much more quickly than PDT+PEFT in every task. > **Q2.** Performance when given fewer demonstrations? We present 3-shot behavior cloning results of baseline models and Meta-Controller in Table R.3 of the author rebuttal. We observe that our model outperforms all the baselines in most of the tasks, demonstrating its adaptability on unseen embodiments and tasks in a few-shot regime. > **Q3.** The episode sampling strategy is confusing. Our meta-training dataset consists of a replay buffer of an expert agent, collected throughout the agent’s learning period, which means the behavior or policy in the buffer can vary significantly. Initially, the expert agent's policy is sub-optimal and it improves over time. We observe that this leads to diverse policies for a single task. Our matching framework, however, requires consistency between input states and output actions in both support and query data. To ensure this consistency, we sample episodes from similar training periods within the replay buffer. For instance, in the hopper-stand task with 1000 trajectories sorted by training period, we select a query trajectory (e.g., the 100th) and support trajectories within a neighboring range of size 10 (i.e., from the 95th to the 105th). The “temporal segment size 10” means this range. This ensures that the support and query data are temporally close, providing consistent and relevant task information. > **W1.** The models are complex and matching-based policies are time-consuming. We clarify that transformer architectures are commonly adopted in the literature, and the complexity of our architecture would not be a significant weakness. 
Notably, recent works in the field use much larger transformer-based architectures than ours, such as large language models (LLMs) [3, 5] and vision transformers (ViTs) [4, 5]. In Table R.5 of the author rebuttal, we compare the inference time of our model and VC-1 [4], which uses a ViT-L backbone. Our model achieves faster inference times than VC-1. Although we couldn’t evaluate RT-2 [5] due to lack of official code, it involves a large network with at least a 40-layer ViT and a LLM with 3B parameters, requiring much higher computation costs compared to ours. Regarding the matching process, it is implemented with a single cross-attention layer, constituting only a small part of the overall computation. As shown in Table R.6 in the author rebuttal, most inference time is occupied by the encoders and decoders. In resource-constrained robotic platforms, transformer-based architectures may hinder real-time inference, as discussed in Appendix A. However, advancements in optimizing transformers, such as sparse attention mechanisms, model pruning, and knowledge distillation, can enhance inference speeds. Since these techniques are orthogonal to our contribution, they can be naturally incorporated into our method for resource-constrained robotic platforms, although this is not the main focus of this paper. > **W2.** The environments are too simple, only focuses state-based environments. Performance of image-based or manipulation tasks? First, we clarify that our primary contribution is demonstrating simultaneous generalization to unseen embodiments and tasks using few-shot learning, which is a significant challenge in the field. This necessitated using the DeepMind Control Suite (DMC), the only dataset providing both diverse embodiments and tasks to our knowledge. Other datasets are limited in either range of embodiments [1] or tasks [2], or both. Although DMC might appear simple, it enables rigorous testing of our framework's core capabilities. 
Similar research, such as PromptDT and MetaMorph, used even simpler settings. To our knowledge, our setting represents one of the most challenging for few-shot out-of-distribution generalization in behavior cloning. Second, our method is inherently designed for state-based environments, where "state" refers to proprioceptive sensors attached to each joint. Extending our method to image-based environments is beyond the scope of this paper. It is common in the literature to focus on state-based environments [2, 7], as many real-world robots are equipped with proprioceptive sensors. Regarding "manipulation tasks," we interpret this to refer to 3D robotic arm manipulation tasks, which are also state-based environments. If our interpretation is incorrect, please let us know. Conducting experiments in these environments is feasible but challenging due to the limited availability of diverse and comprehensive 3D datasets. Most available datasets focus on specific joint compositions like robotic arms or quadrupeds, making it difficult to gather sufficient meta-training data to acquire transferable knowledge about 3D embodiments. This limitation reflects the current state of available resources rather than the capability of our approach. Nonetheless, we recognize the importance of demonstrating the generalizability of our approach in more complex settings, and the additional experiments in the author rebuttal may give some insights into this direction. Table R.2 shows that our method still performs well in more diversified embodiments. In Figure R.5, we demonstrate that our method is robust to noise, highlighting its potential in noisy and stochastic real-world scenarios. Finally, as shown in Table R.4, our method's performance improves when we add more diverse embodiments and tasks in the meta-training dataset.
This suggests the potential of our method in more complex scenarios, such as those with noisy transition dynamics or emerging diverse variants of embodiments, as more diverse datasets become available. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I agree that currently no simulation or benchmark considers diverse embodiments and tasks. * Most simulations are related to other controlling patterns. Do you think it's still important to consider diverse embodiments since most robot arms can share similar end-effector control? * Can this work provide some insights for other settings like end-effector control or mobile robots? --- Reply to Comment 1.1.1: Comment: We appreciate the comment. We address each of the questions as follows: > **Q1.** Most simulations are related to other controlling patterns. Do you think it's still important to consider diverse embodiments since most robot arms can share similar end-effector control? **A1.** We think that for simple robots, such as robot arms with serial manipulators and similar end-effectors, end-effector-based control can be a reasonable alternative to handle diverse morphologies. However, we would like to emphasize that our research is designed to address a wider range of multi-joint robots, such as the ones including snakebots [8], crawler robots [9], and quadrupeds [10]. These have diverse control patterns and complex morphologies (e.g., non-serial manipulators), such that computing inverse kinematics for end-effector-based control is often challenging. We believe that considering a variety of embodiments is important because it allows us to tackle challenges across different types of robots, not just those with similar end-effectors. > **Q2.** Can this work provide some insights for other settings like end-effector control or mobile robots?
**A2.** Our approach may also provide valuable insights for settings like end-effector control or mobile robots, particularly in scenarios where robots are customized for specific applications. For example, our method could facilitate data-efficient learning of controllers for new custom robots with unique joint configurations. Additionally, developing and deploying control solutions for new robots with complex kinematics, including end-effector-based control or mobile robots, can be costly and time-consuming. Our approach, which learns controllers from a limited number of expert trajectories, could offer a more efficient and cost-effective alternative in these settings. [8] Pettersen et al., “Snake robots”, Annual Reviews in Control, Volume 44, 2017 [9] Orita et al., “Robust Human Tracking of a Crawler Robot”, Journal of Robotics and Mechatronics Vol.31 No.2, 2019 [10] Fan et al., “A Review of Quadruped Robots: Structure, Control, and Autonomous Motion”, Advanced Intelligent Systems, Vol. 6, 2024
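The episode-sampling strategy explained earlier in this rebuttal (a query trajectory plus support trajectories drawn from a neighboring window of the replay buffer, sorted by the expert's training period) could be sketched as follows; the function and its defaults are our hypothetical illustration, not the authors' code:

```python
import random

def sample_episode(num_trajs, window=10, shots=5):
    # Trajectories are assumed sorted by the expert's training period,
    # so a small window around the query index keeps support and query
    # consistent with a similar (possibly sub-optimal) policy.
    q = random.randrange(num_trajs)
    lo = max(0, q - window // 2)
    hi = min(num_trajs, q + window // 2 + 1)
    pool = [i for i in range(lo, hi) if i != q]
    support = random.sample(pool, min(shots, len(pool)))
    return q, support

q, support = sample_episode(1000)
```

With `num_trajs=1000` and a query index of 100, the support pool spans roughly the 95th to 105th trajectories, matching the hopper-stand example in the rebuttal.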
Summary: This paper introduces a framework called Meta-Controller for few-shot behavior cloning that can generalize to unseen robot embodiments and tasks in continuous control problems. The key contributions are: 1. A joint-level input/output representation to unify state and action spaces across heterogeneous robot embodiments. 2. A novel state encoder with two components: - A structure encoder to capture morphological knowledge - A motion encoder to capture dynamics knowledge Both use adaptive parameters to specialize to specific embodiments and tasks. 3. A matching-based policy network that leverages a few demonstrations to predict actions for new tasks. 4. A training protocol involving episodic meta-learning followed by few-shot fine-tuning. The authors evaluate Meta-Controller on various tasks from the DeepMind Control suite, demonstrating superior few-shot generalization performance compared to existing modular policy learning and few-shot imitation learning approaches. Key results show Meta-Controller outperforms baselines, especially on challenging tasks like the reacher-four embodiment. Ablation studies validate the importance of the structure encoder, motion encoder, and matching module. Strengths: This paper has several strengths: - To the best of my knowledge, the proposed approach is novel. The Meta-Controller framework addresses an important challenge in robotics - simultaneous generalization to unseen embodiments and tasks with few-shot learning. This is a solid step beyond existing work that typically focuses on either embodiment generalization or task generalization, but not both. To solve the problem, this paper introduces a well-thought-out architecture that combines several innovative components: - Joint-level I/O representation for handling heterogeneous embodiments - Structure and motion encoders with adaptive parameters - Matching-based policy network for few-shot adaptation - The evaluation is thorough. 
The authors conduct extensive experiments using the DeepMind Control suite, comparing their approach against multiple baselines from both modular policy learning and few-shot imitation learning domains. The paper also includes comprehensive ablation studies that validate the importance of each component in the architecture. - The performance is strong. Meta-Controller consistently outperforms existing methods, especially on challenging tasks like the reacher-four embodiment, demonstrating its effectiveness. - The paper is well-structured and clearly written, making it accessible despite the complexity of the topic. And the figures are nice and neat. Weaknesses: While the paper has many strengths, there are also some potential weaknesses or areas that could be improved: - The proposed method assumes the number of joints is equal to the action dimension. However, this assumption may not always hold, as many robots incorporate passive joints, such as in four-bar linkages. Please correct me if my understanding is wrong. - The proposed method appears to share conceptual connections to some retrieval-based techniques [1,2]. A comparative empirical evaluation with these related methods could provide helpful insights. - Limited real-world validation: As the experiments were conducted exclusively within simulated environments (the DeepMind Control suite), open questions remain regarding the approach's transferability to real-world robotic systems incorporating more complex dynamics and noise. - Limited failure case discussion: Analyzing scenarios in which the Meta-Controller performs less effectively could offer perspectives for future enhancements. A more in-depth failure case analysis may strengthen the presentation. [1] Pari, Jyothish, et al. "The surprising effectiveness of representation learning for visual imitation." arXiv preprint arXiv:2112.01511 (2021). [2] Sridhar, Kaustubh, et al. "Memory-consistent neural networks for imitation learning." 
arXiv preprint arXiv:2310.06171 (2023).

Technical Quality: 3
Clarity: 4

Questions for Authors:
- In the case of robot arm manipulation, what are the benefits of utilizing the proposed structure-motion encoder over the direct application of end-effector-based control?
- Could you clarify what is meant by the "Embodiment-specific positional embedding" as depicted in Figure 2? How do you implement it?
- It appears that the action $m_j$ is conditioned solely on the feature of joint $j$ ($z_j$). Why not consider the state features from all joints to simultaneously generate motions for each joint?
- What necessitates the use of an action encoder and decoder? Why not simply retrieve actions based on their similarity?

Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal:

> **Q1.** In the case of robot arm manipulation, what are the benefits of utilizing the proposed structure-motion encoder over the direct application of end-effector-based control?

Compared to end-effector-based control, our method eliminates the need for separate low-level controllers manually tuned for each robot and simplifies the control process by learning a unified representation adaptable to various tasks and embodiments. By leveraging joint-level representations and the structure-motion state encoder, our framework seamlessly adapts to different robot morphologies and dynamics, ensuring robust performance across a wide range of embodiments. This is crucial for developing versatile and adaptive robotic systems.

> **Q2.** Could you clarify what is meant by the "Embodiment-specific positional embedding" as depicted in Figure 2?

As explained in Section 3.2.1 (lines 132-147), the "Embodiment-specific positional embedding" refers to a mechanism to incorporate specific information about the robot's physical configuration into the model. This embedding accounts for the unique positional relationships and characteristics of each joint within a given robot embodiment. Similar to positional encoding in transformers, it is implemented by adding a learnable vector to each joint's state token.

> **Q3.** Why not consider the state features from all joints to simultaneously generate motions for each joint?

The architecture design of our state encoder allows the structure encoder to contextualize the joint representation considering all joints, while the motion encoder captures their temporal dynamics. By applying these two encoders in sequence, we can encode motion features that incorporate both spatial and temporal information, in a more computationally efficient way than using full spatio-temporal interactions.
This approach is similar to axial attention techniques [6] used in video models, which are widely adopted for their efficiency and effectiveness in handling spatio-temporal data.

> **Q4.** What necessitates the use of an action encoder and decoder?
> **W2.** The proposed method appears to share conceptual connections to some retrieval-based techniques.

While matching is effective for few-shot learning, applying it directly to the raw action space imposes a strong constraint: the output action must be a convex combination of the support actions. To alleviate this constraint, we introduce an action encoder and decoder, allowing matching in the latent space rather than the raw label space. This enhances the expressiveness of the policy network, enabling adaptation to various unseen tasks with non-convex relationships between states and actions. Additionally, by encoding actions along the temporal axis, the model can construct a pool of transferable action features related to local motor skills, which facilitates efficient transfer to unseen tasks that share modular skills but involve different skill combinations. We note that the retrieval-based techniques suggested by the reviewer are similar to our matching-based policy, except that they interpolate (or hard-select) the raw actions. To validate matching in the latent space, we conducted an ablation study by removing the action encoder and decoder from our framework. As shown in Table R.1 in the author rebuttal, removing the action encoder and decoder decreases performance, supporting their effectiveness.

> **W1.** The proposed method assumes the number of joints is equal to the action dimension.

We clarify that our method can address the cases where the number of joints is not equal to the action dimension. As described in Section 3.1 (lines 118-119), we assign zero values for free (non-actuable or passive) joints, and simply discard their tokens after encoding via the state encoder.
This is because the states of passive joints provide useful information about the morphology of the embodiment during the encoding process, but there are no actions to predict for these joints. Note that most of the embodiments used in our experiments (e.g., hopper, walker, cheetah, acrobot, cartpole, wolf) contain passive joints.

> **W3.** Limited real-world validation.

We clarify that our experiments were conducted exclusively within the DeepMind Control Suite (DMC) because, to our knowledge, it is the only benchmark currently available that includes both diverse embodiments and tasks. To address concerns about the transferability of our approach to real-world robotic systems, we have extended our experiments to simulate more realistic conditions. Specifically, we introduced varying levels of noise to the control actions within the DMC environments, thereby making the transition dynamics both stochastic and noisy. As illustrated in Figure R.5 of the author rebuttal, the performance of our Meta-Controller does not drop significantly despite the presence of random noise. This result underscores the potential of our method to operate effectively under realistic conditions that involve complex dynamics and noise.

> **W4.** Limited failure case discussion.

To address the reviewer's concerns, we conducted additional analyses on the failure cases of the Meta-Controller. We present visualizations of scenarios where the Meta-Controller performs less effectively and cumulative rewards over time for each scenario in Figures R.3 and R.4 of the author rebuttal, respectively. In these failure cases, we observed that agents struggle to obtain rewards until they reach a specific posture. Once they achieve this posture (highlighted by the red box in Figure R.3), they begin to solve the task effectively. This pattern is also reflected in Figure R.4, where the rewards remain near zero until a certain timestep, after which they rise consistently.
This result implies that encouraging the agent to reach states similar to those in the demonstrations (e.g., via exploration strategies) would improve performance in challenging few-shot scenarios.

---

Rebuttal Comment 1.1:

Comment: Thanks to the authors for the detailed response. I maintain my positive rating.
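The latent-space matching argued for in the Q4/W2 response can be illustrated with a small sketch. This is a toy reconstruction under assumed shapes, not the paper's implementation: the function names, the invertible toy encoder/decoder, and the feature dimensions are all invented for exposition. The point it demonstrates is that the attention output is a convex combination only in the latent space, so a nonlinear decoder can produce actions outside the convex hull of the raw support actions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def matched_action(query_feat, support_feats, support_actions, encode, decode):
    """Match a query state feature against support state features, mix the
    *encoded* support actions with the resulting weights, then decode."""
    weights = softmax(support_feats @ query_feat)  # (N,) similarity weights
    latents = encode(support_actions)              # (N, L) action latents
    mixed = weights @ latents                      # convex combination in latent space
    return decode(mixed)                           # decoder may leave the convex hull

# toy stand-ins for the learned action encoder/decoder (invertible, nonlinear)
encode = np.tanh
decode = np.arctanh

rng = np.random.default_rng(0)
support_feats = rng.normal(size=(5, 8))                # N=5 support states, feature dim 8
query_feat = rng.normal(size=8)
support_actions = rng.uniform(-0.9, 0.9, size=(5, 3))  # 3-dimensional actions
action = matched_action(query_feat, support_feats, support_actions, encode, decode)
```

Matching directly in the raw action space would correspond to `encode = decode = identity`, which is exactly the convex-combination constraint the rebuttal describes.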
Summary: This paper introduces Meta-Controller, a few-shot behavior cloning framework designed for adaptation to various embodiments and control tasks. The framework includes a transformer-based structure-motion state encoder that captures knowledge across different embodiments, and a matching-based policy network that generates adaptive policies. Experimental results demonstrate that the proposed method surpasses other baseline methods, and ablation studies confirm the effectiveness of each module within the framework.

Strengths:
1. The structure encoder captures knowledge across different embodiments, enhancing the model's generalization ability across various embodiment types. Additionally, the motion encoder enables the model to comprehend the temporal dynamics of states, which simultaneously improves performance in controlling joints and achieving goals.
2. The ablation study effectively demonstrates the contribution of each module within the framework.
3. The architecture of each module is clearly illustrated and easy to understand.
4. The appendix provides extensive details on training and evaluation, facilitating the reproduction of the experiments.

Weaknesses:
1. The paper uses only four different embodiments (12 in total) to demonstrate generalization ability, which is a relatively small sample size.
2. The proposed method operates only in a 2D coordinate space, limiting its application potential in realistic 3D environments.
3. The multiple transformer-based architecture constrains the possibility of real-time inference.

Technical Quality: 2
Clarity: 3

Questions for Authors:
1. The ablation study in Table 2 is incomplete. Results should be included for scenarios where only the $f_s$ module is removed and where all three modules are removed.
2. Since the author only provides results based on 4 different embodiment types.
This raises the question of whether the policy would still be effective with variations in embodiment configurations, such as random changes in joint length within the same embodiment.
3. Since the policy is learned from a fixed trajectory dataset, its sensitivity to noise perturbations during transitions is unclear. A robustness analysis would further validate the generalization ability of the proposed method.

Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors thoroughly address the paper's limitations, which is commendable. However, some issues remain unresolved, such as the efficiency problems caused by the transformer-based architecture. This inefficiency contradicts the motivation of enabling adaptation to various tasks and embodiments in the real world. Similarly, the constraint of operating in a 2D space limits the method's practical applicability.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal:

> **Q1.** The ablation study in Table 2 is incomplete.

Table R.1 of the author rebuttal completes the ablation study as requested (see rows 1 and 2). Consistent with the discussion in Section 5.3 of the main text, we observe that removing the structure encoder $f_s$ significantly decreases performance on unseen embodiments and tasks. This result indicates that the structure encoder captures transferable and modular knowledge about various morphologies. Additionally, removing all three modules yields a fully supervised model that does not involve meta-learning. As expected, this model fails to adapt from a few demonstrations and suffers from overfitting.

> **Q2 & W1.** It raises the question of whether the policy would still be effective with variations in embodiment configurations.

We conducted experiments on six additional embodiments by manually changing joint lengths (e.g., foot length of hopper) or ratios among joints (e.g., calf-thigh ratio of hopper, front leg-back leg ratio of wolf). These modifications can make the tasks harder, as the original embodiments are optimized for specific tasks. In Table R.2 of the author rebuttal, we compare Meta-Controller with the high-performing baselines from Table 1 of the main text. Our model consistently outperforms all baselines in these challenging variants, demonstrating the robustness and adaptability of our approach to variations in embodiment configurations.

> **Q3.** A robustness analysis would further validate the generalization ability of the proposed method.

We conducted an additional experiment by introducing noise to the transition dynamics. Random noise sampled from $\mathcal{U}[-n, n]$ was added to the agent's action at each timestep, with three noise levels $n \in \{2\%, 5\%, 10\%\}$ of the action range. Figure R.5 in the author rebuttal plots the rewards of our model at each noise level compared to experts.
The results indicate that our method maintains its performance across many tasks as noise levels increase. This shows our model's robustness under stochastic environments. Interestingly, for tasks such as reacher-four, the performance increases with higher noise levels, likely due to the exploration effect induced by stochastic transitions.

> **W2.** The proposed method operates only in a 2D coordinate space, limiting its application potential in realistic 3D environments.

We clarify that our Meta-Controller, including the structure-motion state encoder and matching-based policy network, is designed to be applicable to any coordinate space. In principle, we can extend the input embedding layer of the state encoder to accommodate 3D coordinate inputs. The primary challenge in demonstrating our method in 3D environments is the limited availability of diverse embodiments and tasks in existing reinforcement learning datasets. Since our goal is to achieve simultaneous generalization to unseen embodiments and tasks, we require a meta-training dataset with diverse embodiments and tasks. This leads us to adopt the DMC dataset, which is, to the best of our knowledge, the only dataset that meets this condition (other datasets are composed of either limited embodiments [1], limited tasks [2], or both) and is also widely used in the literature. Although DMC includes some 3D embodiments, their diversity is too limited to cover joint compositions in 3D coordinate space (only 4 unique 3D embodiments). To demonstrate our method's scalability with diverse embodiments and tasks, we conducted additional experiments varying the size of the meta-training data. As shown in Table R.4 of the author rebuttal, our method's performance generally improves when adding more diverse embodiments and tasks to the meta-training dataset. This suggests the potential of our method in more realistic environments as more diverse datasets become available, which will be an important step forward.
While we have not conducted direct experiments in 3D environments, we believe our key findings in 2D embodiments (namely, the generalization capability to unseen embodiments and tasks) will extend to 3D embodiments as well.

> **W3.** The multiple transformer-based architecture constrains the possibility of real-time inference.

We clarify that our primary focus is not on real-time inference but rather on presenting a fundamental framework for achieving robust few-shot learning across various unseen embodiments and tasks, which, to our best knowledge, has not been addressed in previous work. Recent works often adopt transformer-based architectures much larger than ours, such as large language models (LLMs) [3, 5] and vision transformers (ViTs) [4, 5]. In Table R.5 of the author rebuttal, we compare the inference time of our model with VC-1 [4], which uses a ViT-L backbone. Our model achieves faster inference times than VC-1. Although we could not evaluate RT-2 [5] due to the lack of official code, it involves a large network with at least a 40-layer ViT and an LLM with 3B parameters, requiring much higher computation costs. However, these works have demonstrated the feasibility of using transformer architectures in real-time applications. For example, RT-2 achieved real-time inference with their heavy network using cloud computing. Thus, employing multiple transformer-based architectures is not a fundamental barrier to real-time inference. In resource-constrained robotic platforms, transformer-based architectures may hinder real-time inference, as discussed in Appendix A. However, advancements in optimizing transformers, such as sparse attention mechanisms, model pruning, and knowledge distillation, can enhance inference speeds. Since these approaches are orthogonal to our contribution, our method can naturally incorporate them for real-time applications with minimal computing-resource requirements, although this is not the main focus of this paper.
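For reference, the action-noise protocol from the robustness experiment (noise drawn from $\mathcal{U}[-n, n]$ with $n$ a percentage of the action range) can be sketched as follows. This is a minimal reconstruction from the rebuttal's description; the function name, the clipping to the action bounds, and the toy values are our assumptions, not the paper's code.

```python
import numpy as np

def perturb_action(action, low, high, noise_level, rng):
    """Add uniform noise of magnitude noise_level * (action range), then
    clip back to the valid action bounds (clipping is an assumption)."""
    n = noise_level * (high - low)
    noise = rng.uniform(-n, n, size=action.shape)
    return np.clip(action + noise, low, high)

rng = np.random.default_rng(0)
action = np.array([0.2, -0.5, 0.9])
# 5% noise level on an action range of [-1, 1]
noisy = perturb_action(action, low=-1.0, high=1.0, noise_level=0.05, rng=rng)
```

Sweeping `noise_level` over 0.02, 0.05, and 0.10 reproduces the three conditions described in the rebuttal.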
---

Rebuttal Comment 1.1:

Comment: Thanks for the detailed response. However, I believe that **the capability of real-time inference is the prerequisite to achieving the embodied generalization for continuous control**. The large model can be deployed on cloud computing for high-level reasoning but is not applicable for continuous motion control, considering the latency in communication and the requirements of high-frequency control. So I will maintain my original rating unless the authors successfully demonstrate a feasible solution to deploy the model in real-world robotic platforms.

---

Rebuttal 2:

Comment: We respectfully disagree with the reviewer. There have been extensive community efforts in building foundation models for continuous control [11, 12, 13, 14], given their significant potential to create versatile robotic agents that can be utilized in various real-world applications. Since generalization capability inherently stems from the scale of the model and training data, most (if not all) approaches are built upon large-scale Transformer backbones. Although these models may not be advantageous for real-time operation compared to lightweight models, we believe that the value of these efforts should not be underestimated simply due to computational cost, especially when considering their capabilities and significance for general-purpose robots. Additionally, there have always been parallel efforts in the machine learning community to develop more capable models and to accelerate their computation. As discussed in our rebuttal, there is an extensive body of work aimed at reducing the computational demands of Transformers, including techniques such as quantization [15], pruning [16], distillation [17], and linear attention [18], as well as methods to enhance hardware utilization, such as FlashAttention [19, 20, 21]. These approaches are orthogonal to our method and generally applicable to any Transformer backbone.
Moreover, considering ongoing developments in hardware that rapidly reduce the cost of computation and enhance the applicability of large-scale models, we believe it is important to refrain from judging real-time applicability based solely on current computational limitations.

[11] Janner et al., "Offline Reinforcement Learning as One Big Sequence Modeling Problem", NeurIPS, 2021.
[12] Sun et al., "SMART: Self-Supervised Multi-Task Pretraining with Control Transformers", ICLR, 2023.
[13] Chen et al., "Decision Transformer: Reinforcement Learning via Sequence Modeling", NeurIPS, 2021.
[14] Liu et al., "Masked Autoencoding for Scalable and Generalizable Decision Making", NeurIPS, 2022.
[15] Liu et al., "Post-Training Quantization for Vision Transformer", NeurIPS, 2021.
[16] Kim et al., "Learned Token Pruning for Transformers", KDD, 2022.
[17] Sanh et al., "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter", NeurIPS Workshop, 2019.
[18] Choromanski et al., "Rethinking Attention with Performers", ICLR, 2020.
[19] Dao et al., "FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness", NeurIPS, 2022.
[20] Dao, "FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning", ICLR, 2024.
[21] Shah et al., "FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-Precision", arXiv preprint arXiv:2407.08608, 2024.

---

Rebuttal 3:

Comment: Your response shows your unprofessionalism and lack of seriousness.
- The third and fourth paragraphs in the response appear to be **rephrasings** of the first two paragraphs.
- The mentioned works [11, 12, 13, 14] primarily aim to address general decision-making problems. In contrast, the focus of this submission is on the generalization of robotic control systems across different embodiments and tasks. The starting point of different problems implies distinct requirements. While solving general decision-making problems may prioritize model effectiveness on wide-ranging tasks over computational cost and real-time performance, the same leniency cannot be applied to robotic control systems. Ignoring the real-time demands and limited computational resources during system deployment is unprofessional for real-world applications that require precision and reliability. Related works [a, b, c, d] focusing on similar problem settings have verified the transferability of the model on real robots. Why can't your work do it?
- For robotic control systems, the need for real-time performance in real-world robots is paramount. The paper's oversight in addressing the computational requirements and real-time capabilities necessary for practical deployment is a significant shortcoming. This is especially critical given the diverse hardware systems that entail varying control frequencies and dynamic responses, which should be integral to model design considerations. The techniques [15-21] are potential solutions to reduce the computational demands of Transformers but have not been verified on similar embodied control tasks yet.

In summary, the methodology presented in this paper appears to be under-considered and lacks depth in addressing the complexities of the problem at hand. **It does not seem to have delved into the nuances of robotic control systems and their requirements for continuous control.** Given the above concerns, I believe that the paper does not meet the rigorous standards required for publication in NeurIPS.
It is essential for robot learning research to not only present new ideas but also to thoroughly consider and address the practical implications and challenges associated with real-world applications.

Ref:
[a] Mirage: Cross-Embodiment Zero-Shot Policy Transfer with Cross-Painting
[b] Cross-Embodiment Robot Manipulation Skill Transfer using Latent Space Alignment
[c] Polybot: Training One Policy Across Robots While Embracing Variability
[d] RoboDuet: A Framework Affording Mobile-Manipulation and Cross-Embodiment
Rebuttal 1:

Rebuttal: We appreciate all the valuable comments provided by the reviewers. We will incorporate the additional results and clarifications made during the rebuttal into the camera-ready version of our paper. We want to clarify that there was a typo regarding the details of embodiments and tasks. We use 30 tasks from 10 embodiments as training tasks, which was originally written as 28 tasks from 9 embodiments. The missing tasks are catch and spin from the "ball-in-cup" embodiment. We will update the main text and Appendix to fix this typo. In the individual responses below, we address each reviewer's questions (Q1, Q2, ...) and weaknesses (W1, W2, ...) sequentially. For the tables and figures referenced in the rebuttal (numbered R.x), please refer to the attached PDF file. We summarize all references used in this rebuttal below.

## References
[1] Yu et al., "Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning", CoRL, 2020.
[2] Furuta et al., "A System for Morphology-Task Generalization via Unified Representation and Behavior Distillation", ICLR, 2022.
[3] Jiang et al., "VIMA: General Robot Manipulation with Multimodal Prompts", ICML, 2023.
[4] Majumdar et al., "Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence?", NeurIPS, 2023.
[5] Brohan et al., "RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control", arXiv, 2023.
[6] Bertasius et al., "Is Space-Time Attention All You Need for Video Understanding?", ICML, 2021.
[7] Chen et al., "Decision Transformer: Reinforcement Learning via Sequence Modeling", NeurIPS, 2021.

Pdf: /pdf/7de632ba58d87effa46e46fb15eb75430dcac4ed.pdf
NeurIPS_2024_submissions_huggingface
2024
Meta 3D AssetGen: Text-to-Mesh Generation with High-Quality Geometry, Texture, and PBR Materials
Accept (poster)
Summary: Emu3D is a novel approach in text-conditioned 3D generation, producing high-quality 3D meshes and materials using Physically-Based Rendering (PBR). It uses a two-stage process: first generating images from standard viewpoints, then reconstructing the 3D shape and appearance. This approach is faster and more reliable than earlier methods and creates realistic 3D graphics by modeling light interaction with surfaces. Emu3D also improves mesh generation by predicting a signed-distance field (SDF) for better quality and refining textures for greater detail. It surpasses existing methods in both image-to-3D and text-to-3D tasks, offering superior visual quality and text alignment.

Strengths: The quality of the generated geometry, texture, and PBR materials offers an efficient and high-quality generation workflow. The paper is well-presented, with sufficient experiments to validate the quality of the results.

Weaknesses:
- The generated PBR material includes only limited components: albedo, metalness, and roughness. PBR materials typically also consist of other components, such as index of refraction, scattering, coat, sheen, etc. Therefore, the generated material can only represent surface reflectance from diffuse to specular.

Technical Quality: 3
Clarity: 4

Questions for Authors:
- What is the performance of the proposed method on anisotropic materials?
- What is the performance of the extracted mesh on a transparent object like a glass bottle?

Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations:
- The generated material has significant representational limitations, as it primarily models surface reflection using only albedo, metalness, and roughness.

Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We thank the reviewer for the review and the feedback!

1. **Lack of support for material components such as index of refraction, scattering, coat, sheen, etc.**: Our model indeed only tackles essential PBR parameters (albedo, roughness, and metalness), as these are the most important ones and also the focus of 3D artists when they work on real-time applications like video games. These real-time applications are also the most likely to benefit from AI-generated content, as they require a large quantity of 3D assets and, depending on the game, the quality of these assets need not be as high as, say, CG for a Hollywood production. Additionally, simple PBR textured assets can serve as a starting point for artists to extend the material map, making it a good place to start from. Note that in the field of generative 3D asset creation, to our knowledge, there are no methods that go beyond the simple PBR model (albedo, metalness, roughness). Our approach can in principle be extended to predict additional parameters (e.g., the full Disney BRDF), given sufficiently representative training data. We'll add a note in the final version to clarify these limitations.
2. **Performance on anisotropic and glass-like materials.** We also do not tackle transparent materials yet. Our approach could be extended to those in principle, again given sufficient training data, but less trivially, as it would require modifying the deferred shading part to support transparency.

---

Rebuttal Comment 1.1:

Comment: With the reviewer-author discussion period nearing its end, we want to ensure that our responses have thoroughly addressed your primary concerns. We recognize the importance of this dialogue in resolving the issues raised by the reviewers and are committed to providing any further clarifications you might need. Please let us know if there are any additional questions or if more explanation is required.

Warm regards,
Emu3D Authors
Summary: The paper adopts a two-stage PBR information generation method. In the first stage, a text-to-image model is used to predict the PBR channels. In the second stage, it uses an SDF to reconstruct the geometry and PBR information. Then, a texture refiner network with cross-view attention is used to improve the coarse UV texture.

Strengths: The methods used are quite effective. The writing is relatively easy to understand. The experiments are comprehensive. The performance is indeed quite good.

Weaknesses: The technical novelty is weak. I noticed that neither the dataset nor the code is intended to be open-sourced. I understand that this technology involves commercial use, but I believe that at least all the test sets used in the experiments and the 3D (PBR) assets generated on the test sets should be open-sourced to facilitate comparison in future work. The PBR resolution of 512x512 with a 2x2 grid for 4 views is still relatively low.

Technical Quality: 4
Clarity: 4

Questions for Authors: Recently, Unique3D [1] generated meshes with higher-resolution textures. As Emu3D and Unique3D [1] are concurrent works, it is not necessary to directly compare with it. However, theoretically speaking, compared to Unique3D, the texture resolution of Emu3D is still relatively low. It is suggested to add a paragraph in the paper to discuss the advantages and disadvantages of the two methods relative to each other.

[1] Wu, Kailu, et al. "Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image." arXiv preprint arXiv:2405.20343 (2024).

Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: No. But they have discussed this in the limitations section. They use a triplane representation, which cannot represent large-scale scenes.

Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: Thank you for the review and the helpful comments!

1. **Technical Novelty**: To our knowledge, Emu3D is one of the first academic works that performs text-to-3D with PBR in a feed-forward way. We introduce several novel aspects that come together to achieve high-quality and efficient text-to-3D reconstruction with PBR materials. These include: (i) a novel albedo + shaded RGB grid generation paradigm, with the intuition that the combination of these two modalities can help the reconstruction stage in inferring materials (ablated in Table 1, Fig. 3, and Video); (ii) a novel deferred shading loss that regularizes the predicted materials in an efficient manner (ablated in Table 1, A.6.3, Fig. 12, and Video); (iii) a volumetric SDF renderer that extends Lightplane with Triton kernels to support large batch sizes, high image resolution, and denser point sampling on rays; additionally, we enable a direct loss on the SDF values (A.6.4), also implemented as Triton kernels for scalability (ablated in Table 3, Fig. 4 Row 4 vs 5, and Video); (iv) a novel texture refinement module that uses cross-view attention layers, which greatly improves the fidelity of reconstructed textures by operating in UV space (ablated in A.7 and Fig. 14, Fig. 4 Row 5 vs Row 6, Table 1, Table 3, and Video).
2. **Facilitating Comparisons**: This is a great point, and we are in discussions to open the data as much as possible. We also plan to release the PBR assets generated on the DreamFusion prompts, which are a typical benchmark for comparing text-to-3D models. Note, however, that we do use the publicly available GSO dataset for sparse-view image-to-3D reconstruction. The reconstructed assets produced by our method on GSO will also be released.
3. **PBR resolution**: The resolution of our 2x2 grid is *not* 512x512; it is 512x512 per image in the grid, making the grid resolution 1024x1024 (albedo and shaded, each 1024 x 1024 x 3).
These images are input to EmuLRM, which produces UV textures with resolution 512x512 each for albedo, metalness, and roughness. These are further upscaled to 1024x1024 resolution textures by the texture refiner.
4. **Comparison to Unique3D**: Unique3D upscales the 4-view grid to a resolution of 2048x2048, compared to our 512x512 per-image resolution, which can potentially impart more detailed textures. However, Unique3D only predicts RGB colors, not texture maps, and lacks material outputs as well. This prohibits asset use in novel environments due to the lighting baked into the albedo. Further, we show a qualitative comparison to more recent methods like Unique3D as well as Stable 3D, which came out last week, in the common response PDF. The quality of assets produced by our method is visually more appealing than both. We will add this discussion and the comparisons to the final draft.

---

Rebuttal Comment 1.1:

Comment: With the reviewer-author discussion period nearing its end, we want to ensure that our responses have thoroughly addressed your primary concerns. We recognize the importance of this dialogue in resolving the issues raised by the reviewers and are committed to providing any further clarifications you might need. Please let us know if there are any additional questions or if more explanation is required.

Warm regards,
Emu3D Authors

---

Rebuttal Comment 1.2:

Comment: Thank the authors for the detailed rebuttal and the additional results provided. It's surprising to see that the texture refinement is effective, even with UV texture maps that contain numerous isolated pieces. Regarding the third point, I meant to ask about optimizing with respect to the renderings of VolSDF, rather than the input images. Nevertheless, I agree with the authors' point regarding the time cost. The rebuttal has addressed my major concerns, and I have adjusted my scores accordingly.
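As background on why albedo, metalness, and roughness suffice for the basic material model debated in this thread: in the standard metalness parameterization, both the diffuse color and the specular reflectance at normal incidence (F0) are derived from these maps. The sketch below shows that textbook conversion; it is not code from the paper, and the 0.04 dielectric F0 is the usual convention rather than anything the authors state.

```python
import numpy as np

def metalness_workflow(albedo, metalness, f0_dielectric=0.04):
    """Standard metalness workflow: metals get no diffuse term and take
    their specular color (F0) from the albedo; dielectrics keep the albedo
    as diffuse and use a constant ~4% F0."""
    albedo = np.asarray(albedo, dtype=float)
    diffuse = albedo * (1.0 - metalness)
    f0 = f0_dielectric * (1.0 - metalness) + albedo * metalness
    return diffuse, f0

# pure dielectric (e.g., plastic) vs. pure metal (gold-ish albedo)
d_die, f_die = metalness_workflow([0.5, 0.2, 0.1], metalness=0.0)
d_met, f_met = metalness_workflow([1.0, 0.77, 0.34], metalness=1.0)
```

Roughness then only controls the width of the specular lobe in whichever microfacet BRDF (e.g., GGX) consumes these values, which is why this three-map model covers the diffuse-to-specular range the first reviewer describes but nothing beyond it (no transmission, coat, or sheen).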
Summary: This paper introduces a two-stage 3D asset generation pipeline that outputs high-quality 3D meshes with PBR materials in approximately 30 seconds. The technical novelties include: - In text-to-image stage, generating multiple views of both shaded and unshaded images, and predicting PBR materials during the image-to-3D stage. - Using SDF instead of opacity field which yields higher-quality meshes. - Introducing a new texture refiner designed to recover and enhance details in texture maps. Strengths: The proposed pipeline introduces several components to enhance and extend the existing pipeline [39]. Most design choices are intuitive and well-justified through ablation studies, which include both qualitative and quantitative analyses. Additionally, the paper compares the proposed pipeline with SOTA methods, demonstrating more favorable results through a user study. Weaknesses: I found certain technical novelties challenging to assess: - The use of SDF instead of opacity fields in the reconstruction pipeline is not a novel idea but more of an extension to Instant3D. This approach has also been explored in 3D generation works such as MeshLRM and Latte3D. - The texture refiner, which I found the most interesting and novel part of the pipeline, is difficult to evaluate from the provided qualitative examples. Looking at the texture maps in Figures 2 and 13, which contain many small components and seams from xAtlas outputs, one concern is whether these artifacts might affect the UNet's predictions in UV space, e.g. causing color inconsistencies. The ablation in Figure 14 only shows textures improved in flat regions, without the corresponding UV maps. Showing texture maps before and after refinement around seams would help readers better understand the model's effectiveness, and justify refinement in UV space as opposed to image space. 
Technical Quality: 3 Clarity: 4 Questions for Authors: Regarding the blurriness of the pre-refinement texture map: Is this a result of the image-to-3D reconstruction process or an issue with sampling from the texture field? If it is due to the sampling process, why not optimize with respect to volSDF renderings to recover the details? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors discuss the limitations of their method and broader impact in the paper. Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the review and the feedback on our work! 1. **Use of SDF instead of opacity fields:** We do not claim that using SDF in 3D reconstruction or generation is novel per se (e.g., StyleSDF, SDFusion, GET3D, Latte3D, etc., use it). We do note that we are among the first to use SDF in LRMs specifically. Other works that do the same include MeshLRM, InstantMesh, Large Tensorial Model (LDM) and Direct3D, but these are concurrent (released either after or just before the submission deadline). What makes our approach stand out, even among these more recent papers, is the fact that we use the SDF formulation directly, in a single “phase”. Other papers usually require a first-stage training with opacity fields, followed by a second stage where the opacity field is replaced with SDFs / meshes. In other words, we show that two stages are unnecessary. Furthermore, we do not simply exchange NeRF opacity fields with a VolSDF formulation. Instead, we implement the SDF renderer by extending the open-source Lightplane Triton kernels to support large batch sizes, high image resolution, and denser point sampling on rays. Additionally, we enable a direct loss on the SDF values (A.6.4), also implemented as Triton kernels for scalability (note that a naive PyTorch implementation of such a loss only allows a batch size of 1 image on an A100 GPU at our base resolution due to increased memory requirements). While these are “engineering” contributions, they are key to making this a practical method. Note that we compare against the concurrent works MeshLRM and InstantMesh, which were released before our submission. 2. **Texture refinement evaluation, before and after refinement outputs:** Our texture refiner is evaluated extensively both quantitatively and qualitatively: (i) *Table 1*: Quantitative evaluation of 4-view PBR reconstruction on the internal dataset. 
The effectiveness of the texture refiner is evident from improvements in LPIPS and PBR PSNR from config F to config G, where the only change is the addition of the texture refiner. Further improvement from config H to I, again solely due to the texture refiner, underscores its efficacy. (ii) *Table 3*: Quantitative evaluation of 4-view non-PBR reconstruction on the GSO dataset. The significant improvement in texture quality, particularly in PSNR and LPIPS, between config C and D, demonstrates the impact of adding the texture refiner. (iii) *Fig. 4, row 5 vs. row 6 (ours without tex-refiner and ours with tex-refiner)*: Qualitative evaluation shows the effectiveness of the texture refiner, with clear improvements in texture fidelity. (iv) *Fig. 14*: Qualitative evaluation further demonstrating the impact of texture refinement. (v) *Supplementary Video*: 360-degree view texture refinement ablation for select assets. For before-and-after refinement visualizations of UV texture maps, please check the common PDF doc. We will further add UV space improvements to the final draft for better understanding as suggested. 3. **Blurriness in Image-to-3D and why not optimization:** We suspect that the degradation in textures is due to the volumetric 3D representation of colors. Since texture is inherently a surface property, extending it into 3D space beyond the surface is inefficient, leading to blurriness. For instance, this degradation occurs in all volumetric LRM-based methods like Instant3D, Instant Mesh, TripoSR, and the NeRF stage of MeshLRM, SparseNeuS, etc. In contrast, approaches like GRM or the mesh stage of MeshLRM, which use pixel-aligned Gaussians and rasterization-based rendering, respectively, capture textures much better due to their near-surface or surface representation. Similarly, our texture refinement uses UV representation with rasterization for rendering textures, ensuring textures exist only on the surface. 
While optimizing with respect to input images is a viable approach, it has issues: (a) Since the entire shape might not be visible in the input views, seams can appear between optimized visible and unoptimized non-visible areas, leading to incoherence and contrast mismatch. (b) Reconstructed objects are not perfect, leading to projection mismatches between the rendered images of the reconstructed object and the input images. This mismatch can provide incorrect supervision to the optimization objective, as the projections of imperfect 3D reconstructions may not align with the input images. (c) Time cost: Optimization takes a longer time. A learned approach like our texture refiner can combat both (a) and (b) since the training process learns to overcome these issues in a data-driven manner and can do so with a single feedforward pass, addressing the time cost (c). --- Rebuttal Comment 1.1: Comment: With the reviewer-author discussion period nearing its end, we want to ensure that our responses have thoroughly addressed your primary concerns. We recognize the importance of this dialogue in resolving the issues raised by the reviewers and are committed to providing any further clarifications you might need. Please let us know if there are any additional questions or if more explanation is required. Warm regards, Emu3D Authors
Summary: The paper mainly works on text-to-3D with PBR materials. It follows the diffusion-based multiview generation and LRM-based reconstruction paradigm. It is a fast feed-forward solution for PBR generation: The diffusion model predicts both shaded and albedo images, and the LRM predicts PBR via differentiable rendering with an efficient pixel-wise deferred shading loss. Moreover, it designs a UNet-based texture refiner to project input images to mesh, bringing sharp textures. Also, it adapts LightplaneLRM with the VolSDF renderer and SDF loss to enhance geometry. The generation only takes 30 seconds. It outperforms state-of-the-art feed-forward baselines and performs comparably to top optimization-based industry solutions. Strengths: 1. Good performance. a. It compares with and outperforms some very recent SOTA approaches like InstantMesh and MeshLRM as well as some commercial products like Meshy and Luma Genie. b. It showcases that it can predict well on objects with mixed materials, which is very impressive. Its PBR prediction achieves SOTA performance. Also, it is not optimization-based and thus fast. c. Its texture refinement module is robust and preserves many details (Figure 4, 8), which helps win out in terms of visual appearance. 2. It conducts many quantitative and qualitative ablation studies to validate each component’s effectiveness. 3. The writing is very clear and easy to follow. Notations are used properly and are well explained. Each design choice is well-motivated and elaborated. It also includes a nice preliminary on the BRDF model used. Weaknesses: 1. Image-to-3D may have a degraded material decomposition. Since the proposed approach does not predict albedo for input images, PBR prediction correctness will drop significantly as pointed out in the ablation study (Table 1, Fig. 3). 2. The diffusion model is inherently stochastic, which leads to variance in albedo prediction. How large is this variance? 
How will this variance impact the material decomposition afterward? This point is not quantitatively evaluated. 3. The paper evaluates PSNR for metallic. It is great. But metallic is usually binary. A simple 2x2 confusion matrix on metallic prediction on single-material objects can show the performance more intuitively. 4. If input images are taken under non-uniform lighting conditions, I doubt if albedo can still be correctly decomposed. 5. Lack of failure case analysis. Following the previous point, I wonder if there are some typical failure cases for PBR prediction. Technical Quality: 4 Clarity: 4 Questions for Authors: - Fewer than 10% meshes are selected as high-quality samples to finetune Emu (L226). What is the filtering standard? In terms of appearance quality, material correctness, and composition complexity? Please elaborate more. - L221: What dataset is used? Is it publicly available, like Objaverse? Is the dataset used under proper license? - How is the text-to-image model (3-channel output) adapted to output 6 channels? Please provide more details on that. - For results in Figure 7., if I understand correctly, they are deterministic? What is their PBR PSNR? (How cherry-picked are they?) - May consider adding some qualitative comparisons to optimization-based PBR generation baselines like Richdreamer. Although they are time-costly and may not be as competitive as commercial solutions, such a comparison can strengthen the claim. - In the Image-to-3D part (excluding texture refinement), Are albedo predictions only used as inputs to MLP? - Some qualitative examples of Luma Genie 1.0, stage 2 vs the proposed approach (A.5). It would help to show where optimization helps to get an edge. - 30-sec Runtime breakdown to each step. Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Some suggestions are included in the Questions above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed review and the helpful comments! 1. **Degraded PBR quality with only RGB shaded inputs** (weakness #1): Yes, indeed the PBR quality suffers when only RGB shaded inputs are provided (upper half of Table 1) since the material decomposition is ambiguous. For this reason, the complete proposed approach for text-to-3D partially mitigates this problem by generating both the shaded and the albedo grid, which enhances the PBR quality (Fig. 4). 2. **Evaluation of variance in albedo and shaded RGB prediction** (weakness #2): Yes, for a given text prompt, the albedo and shaded RGB produced by the Emu model can vary due to the stochastic nature of diffusion. However, evaluating the effects quantitatively is challenging in the absence of the absolute ground truth. For evaluating the quality of outputs in the text-to-3D setup, we conducted a user study of user preferences using a fixed model seed, but evaluating the quality of material might be a bit more challenging for a non-professional user study participant. 3. **Confusion matrix for metalness** (weakness #3): It is true that, physically, materials are either metallic or dielectric, but computer graphics artists often use non-binary values of metalness, for example to represent materials that are a mixture of dielectrics and metals (e.g., certain rock conglomerates, transitions between oxidized/rusted and polished metal); in our own dataset, a non-trivial proportion of metallic assets have non-binary metalness. This justifies using PSNR or a similar regression metric for metalness too. 4. **Non-uniform lighting** (weakness #4): If the input views are captured with non-uniform lighting, it will indeed hamper PBR reconstruction performance. 
However, in our application, i.e., text-to-3D, lighting is controlled by fine-tuning the Emu grid generator on 3D models rendered with consistent lighting. This biases it toward generating images that are consistent across views and uniformly lit. 5. **Failure cases** (weakness #5): We will add a discussion and visualizations corresponding to failure cases in the final draft. --- ### **Answers to questions** 1. **Filtering standard for the 10K objects used for fine-tuning Emu:** To filter the dataset to obtain the top 10K objects, we first remove those with too low an aesthetic score. We then select the top 10K objects with the highest CLIP score (dot product between CLIP of the object render and CLIP of the caption). This number was empirically determined, as including more assets, i.e., including assets with a lower CLIP score, results in decreased text-image alignment. To control material quality, we only select assets with valid metalness and roughness maps. Compositional quality is generally low, as a vast majority of assets in the dataset are single objects rather than scenes. 2. **Details about the dataset:** We use an internal proprietary dataset which is similar to Objaverse. The usage has been scrutinized by professional legal counsel, which raised no concerns about copyright. Copyright issues are why we did not use Objaverse. 3. **Adapting 3-channel Emu to 6-channel outputs:** We employ a text-to-image diffusion model pre-trained on billions of images annotated with text and expand its input and output channels by a factor of 2 to support simultaneous generation of the shaded appearance and albedo (Section A.6.1). This requires only the input and output layers to be adapted. 4. **Figure 7 results:** Yes, correct, the results are deterministic. Here, the task is sparse-view reconstruction from 4 posed RGB images (without the albedo input), resulting in 3D meshes and their PBR materials. 
The results correspond to config G in Table 1, which shows the mean PBR PSNR on the whole test set. For the 6 images shown in the figure, the mean PSNR is 23.49 for albedo, 24.91 for metalness, and 21.02 for roughness, with standard deviations of (2.72, 3.86, 4.17), respectively. The object selection for the figure was intended to target objects with a non-uniform metalness or roughness (not just a single value) to make for an interesting visualization and highlight the model’s effectiveness. 5. **Comparison to Richdreamer:** Please check the PDF in the common response. 6. **How are albedo predictions used by the LRM:** In the vanilla Instant3D LRM, an image encoder encodes patchwise feature tokens of the image using a pre-trained DINO network (Caron et al. 2021). LightplaneLRM further extends this by adding Splatter layers on the DINO outputs in the subsequent image-to-triplane transformer. For accommodating additional albedo predictions, we add the encoded albedo patchwise features to the DINO network. This gives DINO access to both albedo and shaded image information, and therefore subsequently to the image-to-triplane transformer and the splatting layers. We will add these details to the final draft. 7. **Qualitative Luma Genie 2.0 results:** Please check the PDF in the common response. 8. **Runtime breakdown:** Emu image generation is 20s, LRM reconstruction is 2s, meshification is 8s, and texture refinement < 1s. --- Rebuttal Comment 1.1: Comment: With the reviewer-author discussion period nearing its end, we want to ensure that our responses have thoroughly addressed your primary concerns. We recognize the importance of this dialogue in resolving the issues raised by the reviewers and are committed to providing any further clarifications you might need. Please let us know if there are any additional questions or if more explanation is required. Warm regards, Emu3D Authors
Rebuttal 1: Rebuttal: We thank the reviewers for their constructive and valuable feedback. We are pleased that all four reviewers found our presentation to be excellent, the soundness of our proposed approach to be good or excellent, and our contributions to be good. We are glad the reviewers found our method to be “effective” (**cPQZ**), producing “efficient, high-quality results” (**Xevg**), and “favorable” (**Zgd2**), with “good performance” (**z8qF**, **cPQZ**). All reviewers appreciate the experiments, noting that the evaluation is “comprehensive“ (**cPQZ**) and “sufficient” (**Xevg**) with “many qualitative and quantitative studies” (**z8qF**, **Zgd2**), and “thorough ablations” (**Zgd2**) that “validate each component's effectiveness” (**z8qF**). The reviewers also found our design choices to be “well-motivated” (**z8qF**), “intuitive, and well-justified” (**Zgd2**). They further appreciated the writing quality as clear and easy to follow (**z8qF**, **cPQZ**) and well presented (**Xevg**). Attached is a PDF with requested extra comparisons and visualizations, which will be included in the final draft. Due to limited space, we encourage the reviewers to zoom into the figures if necessary. Pdf: /pdf/b955d77480182dddfad0d041daca54e0b70405b3.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Reducing Transformer Key-Value Cache Size with Cross-Layer Attention
Accept (poster)
Summary: To resolve the costly memory consumption of KV cache in transformer-based LLMs, this paper proposes Cross-Layer Attention (CLA). In contrast to multi-query attention (MQA) and grouped-query attention (GQA) which share KV cache across attention heads, CLA shares KV cache across contiguous layers. Pre-training experiments prove the effectiveness of CLA over MQA and GQA. Strengths: 1. The experiments are comprehensive and the design choices are clearly documented. 2. The problem studied is an important one and the approach is novel. Weaknesses: 1. Experimental results are not presented fully. Figure 3 only shows the validation perplexity for a subset of models, and Table 3 and 4 only presents the accuracy for a subset of models. I couldn't find the results elsewhere in the paper or the appendix. Presenting the results fully contributes to the holistic understanding of the effectiveness of the proposed approach. 2. Suggestions to experimental designs. The authors choose to pre-train 1B and 3B models from scratch to prove the effectiveness of CLA. It would perhaps be more convincing if the authors can choose a fully open source pre-trained model (such as OpenLLaMA) as baseline, and pre-train it with CLA to prove its effectiveness. Pre-training 1B and 3B models using a custom set of hyper-parameters and datasets is less convincing in comparison. 3. Training loss curves are missing. Training loss is important for understanding whether the models have fully converged. 4. Citation formats may be incorrect. Many references on page 10-12 only show author names, title, and year, and omit publication venue. 5. Some typos. Line 16: "which are possible" -> "which are impossible", line 18: "would otherwise be possible" -> "would otherwise be impossible". Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Can the authors please provide some insights on why cross-layer sharing of KV cache may produce better models than GQA? 
The proposed method would be more convincing if some design justifications are provided. 2. Is it possible to adapt pre-trained models to using CLA through fine-tuning? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Yes the authors addressed the limitations. I do not envision additional limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
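The memory trade-off between MHA, GQA, and GQA+CLA that this review discusses can be illustrated with a back-of-the-envelope cache-size calculation. The model dimensions below are hypothetical stand-ins, not the paper's exact configurations:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch,
                   bytes_per_elem=2, cla_factor=1):
    """KV-cache footprint: one K and one V tensor per KV-producing layer.
    With CLA, only every `cla_factor`-th layer produces fresh K/V;
    the layers in between reuse the cached tensors."""
    kv_layers = n_layers // cla_factor
    return 2 * kv_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per_elem

# Hypothetical 1B-scale shape: 22 layers, head_dim 64, fp16 cache, 4K context.
mha  = kv_cache_bytes(22, 32, 64, 4096, 1)               # 32 KV heads
gqa4 = kv_cache_bytes(22, 8, 64, 4096, 1)                # 4x fewer KV heads
cla2 = kv_cache_bytes(22, 8, 64, 4096, 1, cla_factor=2)  # GQA4 + CLA2

print(mha // gqa4)   # → 4: GQA4 alone shrinks the cache 4x
print(gqa4 // cla2)  # → 2: CLA2 halves it again
```

The two reductions compose multiplicatively because head sharing and layer sharing act on independent axes of the cache, which is why the paper applies CLA on top of GQA/MQA rather than instead of them.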
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and generous feedback. We address each point below: ## Could we include more comprehensive results tables in the paper? We thank the reviewer for pointing this out, and agree the paper could be improved by including more comprehensive results tables. For validation perplexities, we note that validation perplexities for all model architectures in our design space exploration, including those not shown in Table 1 or Figure 3, are already included in Appendix B Table 5. For accuracies on downstream tasks, the reviewer is correct that our initial submission only included downstream accuracies for a subset of architectures. Although we include downstream accuracies for selected 1B-scale architectures (see Table 3) and all 3B architectures (see Table 4 and Appendix D Table 7), we do not include downstream accuracy results for all architectures in our 1B-scale design space exploration. We will correct this in the final submission, and include a table (likely in the appendix) containing the missing benchmark results for all architectures not currently documented. We have provided this full table of results in an accompanying comment on OpenReview. ## Could we compare to an open-source pre-trained model? We thank the reviewer for their suggestion. In response to the reviewer's comment, we have run an experiment comparing directly with the open-source GQA4 model TinyLlama-1.1B at its 105B-token intermediate checkpoint. In this experiment, we pretrained our own version of TinyLlama-1.1B-105B from scratch using CLA2, using otherwise-identical training data, model architecture, and training hyperparameters as described in the TinyLlama repository. In particular, we used the same cosine learning rate schedule as TinyLlama 1.1B, which decays over 3T tokens (although we only ran training to 105B tokens). 
In this comparison, we find that our TinyLlama-1.1B-105B checkpoint trained with CLA2 matches or exceeds the performance of the original TinyLlama-1.1B-105B checkpoint. We include the benchmark scores for each model below: | Model | $\uparrow$ hellaswag | $\uparrow$ piqa | $\uparrow$ winogrande | $\uparrow$ sciq | $\uparrow$ openbookqa | $\uparrow$ boolq | $\uparrow$ arc-e | $\downarrow$ Wikitext (ppl)| |--------------------|----------------------|-----------------|-----------------------|-----------------|-----------------------|------------------|---------------------|-----------------------| | Tiny-Llama-CLA | 0.4583 | 0.6855 | 0.5375 | 0.7730 | 0.3260 | 0.5930 | 0.4710 | 19.5853 | | Tiny-Llama-Original| 0.4354 | 0.6730 | 0.5367 | 0.7240 | 0.2980 | 0.5963 | 0.4491 | 21.3407 | At the reviewers' discretion, we would be happy to include these results in the final version of our paper. ## Could we include training loss curves? We would be happy to extend the appendix in the final version of our paper to include visualizations of the training loss curves for all the models in our experiments. Although we are not able to attach images of the loss curves in this OpenReview response, we can say that the shapes of the loss curves for all CLA and non-CLA models are qualitatively similar, and that all models have loss curves typical of transformer-based LLMs trained with cosine-decay learning rate schedules. ## Formatting and typos We appreciate the careful attention the reviewer put into our work, and will fix these errors. ## Why should we expect cross-layer sharing to help relative to using only GQA/MQA? GQA and MQA can be seen as relying on the hypothesis that in MHA models there is hidden redundancy in the content of KV activations in _different heads_ within each layer. GQA and MQA then exploit this hypothesized redundancy to reduce the size of the KV cache relative to MHA, with only minor degradation in performance. 
Similarly, our original motivation for investigating Cross-Layer Attention was based on the hypothesis that even with GQA/MQA, there may be remaining hidden redundancy in the content of KV activations across _different nearby layers_. If there is hidden redundancy across layers, then GQA and MQA have no mechanism for exploiting that redundancy to reduce KV cache size, no matter how we set our GQA/MQA hyperparameters -- however, CLA would be able to exploit that redundancy. We would be happy to mention this design motivation in the final version of our paper. ## What about adapting pre-trained models to use CLA? We agree that adapting (or "uptraining," as the GQA paper calls it) existing models to use CLA is an interesting avenue of research. We have conducted some preliminary experiments on CLA uptraining. We have found that it is possible to convert 1B- and 3B-scale MQA models each trained on 100B tokens into MQA-CLA2 models with 5.7% and 4.9% higher perplexity, respectively, by uptraining them with CLA for 20B tokens. Similarly to the GQA paper, we find that initializing the uptrained model by mean-pooling KV projection weights outperforms simply dropping KV projections from the model. At the reviewers' discretion, we would be happy to include these preliminary uptraining results in the final version of the paper. We also believe it is likely possible to improve further upon our uptraining scheme, and would be happy to mention in the paper that improved schemes for CLA uptraining represent a promising direction for future research. We also note that although the original GQA paper focused on uptraining, GQA has also had a significant impact on industry practice via its direct application to LLM pretraining, as seen in models like the Llama 3 series. We hope that CLA may be able to have a similar impact via direct application to pretraining. 
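The mean-pooling initialization for CLA uptraining described above can be sketched in a few lines. The weight shapes and the helper name `init_shared_kv_proj` are illustrative assumptions, not the authors' code:

```python
import numpy as np

def init_shared_kv_proj(kv_weights):
    """Initialize the single shared K (or V) projection of a CLA group by
    mean-pooling the per-layer projections it replaces, mirroring the
    GQA-paper uptraining recipe rather than dropping projections outright."""
    return np.mean(np.stack(kv_weights, axis=0), axis=0)

# Hypothetical CLA2 group: two adjacent layers come to share one KV projection.
rng = np.random.default_rng(0)
w_layer0 = rng.normal(size=(512, 64))  # (d_model, kv_dim), illustrative shapes
w_layer1 = rng.normal(size=(512, 64))
w_shared = init_shared_kv_proj([w_layer0, w_layer1])

assert w_shared.shape == (512, 64)
assert np.allclose(w_shared, (w_layer0 + w_layer1) / 2)
```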
--- Rebuttal 2: Title: Full Benchmarking Results for Models in our Design Space Exploration Comment: Attached please find the full benchmarking results for the models in our 1B-scale design space exploration. (This comment accompanies our rebuttal, which could not fit the attached table due to length constraints.) |Model|$\uparrow$ hellaswag|$\uparrow$ piqa|$\uparrow$ winogrande|$\uparrow$ sciq|$\uparrow$ openbookqa|$\uparrow$ boolq|$\uparrow$ arc-e|$\downarrow$ wikitext (PPL)| |-|-|-|-|-|-|-|-|-| |H128-MHA|33.88|67.19|53.12|81.1|19|61.62|52.15|20.90| |H128-GQA4|33.82|67.79|51.62|78.6|18.6|60.73|51.47|21.38| |H128-GQA2|33.34|67.85|53.04|79.6|20|60.89|50.97|21.64| |H128-MQA|33.53|67.79|51.07|78.4|18.6|56.61|51.35|21.79| |H64-MQA|33.24|67.52|50.04|75.8|17|59.39|51.22|22.31| |H46-MQA|32.99|66.7|52.41|77.9|19.2|60.18|49.34|22.59| |H32-MQA|32.58|67.8|50.99|74.5|18.4|59.94|49.02|23.76| |H512-MQA-CLA2|33.68|67.68|52.33|77.3|18.8|55.72|52.19|22.42| |H256-MQA-CLA2|33.9|67.74|49.8|77.4|18.2|60.34|50.04|21.64| |H128-MQA-CLA2|33.29|67.63|49.88|78.3|18|59.51|49.62|21.82| |H90-MQA-CLA2|33.15|67.41|51.85|76.7|17.2|59.11|52.06|22.13| |H64-MQA-CLA2|32.71|67.36|51.7|74.9|19.4|54.68|50.88|22.43| |H256-GQA4-CLA2|33.63|66.92|51.78|78.5|18.6|60.43|51.18|21.40| |H128-GQA4-CLA2|33.64|67.74|50.59|78.1|18.6|58.78|51.09|21.66| |H128-GQA2-CLA2|33.39|67.14|52.17|77.3|19.8|59.45|51.26|21.83| |H128-MQA-CLA3|32.91|67.74|51.54|76.6|18|54.53|51.18|22.18| |H128-MQA-CLA4|32.51|67.57|51.85|75.4|18.6|59.33|51.73|22.62| |H128-MQA-CLA2-KeepEnds|33.58|68.12|52.72|76.2|19.2|60.12|51.64|21.88| |H128-MQA-CLA2-DenseFront|33.43|67.3|52.57|75.7|19.4|49.14|50.88|22.09| |H128-MQA-CLA2-DenseBack|32.71|66.65|51.7|76.5|17.4|59.69|50.51|22.8| --- Rebuttal Comment 2.1: Comment: Thank you to the authors for the detailed response. My concerns have mostly been addressed. I think the additional clarifications and experimental results are a great addition to the paper. I have adjusted my initial rating accordingly. 
--- Reply to Comment 2.1.1: Title: Thank You Comment: We are glad that we were able to address the reviewer's questions, and thank them for their response.
Summary: In this paper, the authors proposed an approach, Cross-Layer Attention (CLA), to accelerate the autoregressive generation procedure of Large Language Models. Going beyond multi-query attention, the main idea of CLA is to share the Key-Value caches among attention layers. Intuitively, this idea is straightforward to further reduce the size of the KV cache that must be stored during generation. The authors conducted several experiments covering different model sizes and combinations between CLA and GQA/MQA to verify the effectiveness of the CLA module. Strengths: 1. The problem this paper aims to tackle has great significance in real-world applications of LLMs. 2. Overall, the proposed CLA approach is simple and easy to implement, which is friendly for practical engineering. The paper is also easy to follow. 3. **Efficiency**: from the experimental results, the CLA module can achieve further acceleration beyond MQA/GQA approaches across different model sizes. 4. **Performance**: it is interesting to see that the cross-layer sharing of KV caches does not hurt the performance of LLMs a lot, which further serves as support evidence that the CLA approach can be applied in practice. Weaknesses: 1. **The comprehensiveness of experiments can be further improved.** - **Model sizes**: the current version of this paper only verifies the effectiveness of CLA on 1B and 3B Transformer language models. Although the experimental results demonstrate that CLA does not bring performance drop on real-world tasks for these two model sizes, it is also questionable whether modern LLMs with >10 and > 100 billion parameters could still use this approach to achieve better accuracy-efficiency. After all, there exist many approaches that are only effective on small-scale architectures in the literature. 
It is also understandable that the computational resources are restrictions for conducting such verification experiments, but I think it would be a great plus for improving the quality of this work. - **Design choices**: in Table 1 and Appendix B, the authors listed a bunch of Non-CLA baselines and MQA/GQA+CLA2/CLA3/CLA4 models. However, CLA-only models are not compared. It would be questionable whether CLA can only be used with MQA/GQA together. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In lines 139-141, could you explain why using separately-learnable affine layer-norm parameters for the KV projection blocks and Q blocks in attention? 2. Similar to Sec 3.2.2, how is the robustness of CLA against other hyper-parameters like batch size? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discussed limitations of this work in Sec 4. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and generous feedback. We address each point below: ## Could we evaluate CLA on larger models? Unfortunately, we lack the resources to train 10B- and 100B-scale models from scratch using CLA. Even training a single model on the same parameter and data scale as Llama 1 7B would require $30\times$ the training compute of our largest 3B-scale training runs. We hope that future work will apply CLA to models at larger scale. ## Why don't we evaluate CLA in combination with MHA? We chose to spend our resource budget evaluating models combining CLA with GQA/MQA, rather than MHA, because we believe that most practitioners who would apply CLA would want to do so in combination with GQA/MQA. As we mention in Section 2.3, CLA reduces KV cache storage footprint, whereas GQA/MQA reduces both KV cache storage footprint *and* memory bandwidth requirements during decoding. Because MQA/GQA provides these dual benefits, we expect practitioners to prefer using MQA/GQA first to obtain KV cache size reductions over MHA, and to only apply CLA to obtain further reductions on top of GQA/MQA. We note that because of the efficiency benefits of GQA at inference time, many recent models, such as Llama 3 and Mixtral, use GQA rather than MHA. We chose to include the MHA model in our design space exploration just as a point of comparison, to contextualize the performance achieved by other GQA/MQA and CLA models. We would be happy to clarify this point, and the reasoning for why we did not directly combine CLA with MHA, in the final version of our paper. ## Why do we use separately-learnable affine parameters for the KV projection? 
We chose to use separately-learnable affine parameters in the layernorms for our KV projections because we did not want to arbitrarily privilege one of the several attention layers sharing each KV projection to also share its layernorm affine parameters, and because it was a convenient way to implement CLA within our framework. We note that using separately-learnable affine parameters increases the parameter counts of our models by less than 0.003% in all cases. Due to limited resources, we did not ablate this choice. ## Could we conduct sweeps over batch size, or other hyperparameters? We chose to use our available resources to sweep over learning rate because it is typically the hyperparameter to which final model performance is most sensitive. For our choice of batch size, Adam $\beta_1$ and $\beta_2$, and AdamW weight decay coefficient, all our experiments used the same standard values used for pretraining Llama 2. We agree that, in the absence of resource constraints, sweeping over batch size and other hyperparameters would be valuable, but we note that the optimal learning rate can depend on batch size; this means that sweeping over batch size while ensuring we were comparing models at their best learning rates would in fact require a prohibitively expensive joint sweep over both batch size and learning rate. We thank the reviewer for their suggestion, and leave more exhaustive ablations of different training hyperparameters as a possible direction for future work. --- Rebuttal 2: Comment: Thank you for your responses. Although I still think the potential of this work should be further evaluated by using models of larger sizes, I understand the limitations of computational resources. I choose to raise my rating to 7.
Summary: This paper introduces a new KV cache compression technique, called CLA, that shares the KV cache across layers. It can be integrated with most existing KV cache compression techniques such as MQA, GQA, and quantization. When applied on top of GQA, CLA can further reduce KV cache size by 2× while maintaining similar accuracy. Experiments with 1B- and 3B-parameter models show that CLA offers better memory/accuracy trade-offs, allowing models to handle longer sequences and larger batch sizes. Strengths: - Simple and effective idea. It introduces a dimension orthogonal to existing KV cache compression methods, and can easily be integrated with existing techniques such as MQA/GQA, quantization, and token eviction. - The writing and presentation are easy to follow. - The proposed Cross-Layer Attention (CLA) does not require a custom implementation and can easily be deployed on any end device. Weaknesses: I personally like this paper and vote for acceptance. However, I think the experimental design could be improved to make it more solid: - Only Wikitext perplexity is reported, which is a weak indicator of LLM ability. While I understand that LLM evaluation is challenging and time-consuming, I suggest the authors also report the MMLU score, even if only on a subset. - Can CLA work under continuous pretraining scenarios like GQA? Namely, can a pretrained LLM be continuously tuned into a CLA-based model, even though the compression ratio might not be as high in this case? I ask this because pretraining LLMs is extremely expensive, as seen with models like Llama 3.1. It is risky and costly to train another open-source CLA model from scratch. If we can adapt existing models into CLA-based models, this paper could have a much greater impact. - One of the main motivations for KV cache compression is long-context inference. However, long-context tasks like few-shot learning or multi-doc QA are much harder than the "short" tasks. 
I understand that long-context evaluation is challenging for pretrained models due to lack of alignment. Still, I think the authors could show some passkey retrieval results, as this task does not require much alignment. Technical Quality: 3 Clarity: 3 Questions for Authors: Check Weaknesses point 2 Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: No major limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and generous feedback. We address each point below: ## Could we report more quality metrics? In addition to Wikitext perplexity, we also report accuracy scores on Hellaswag, PIQA, WinoGrande, OpenBookQA, BoolQ, and ARC-E, which can be found in Table 3 and Table 4. In aggregate, we find that these metrics show that our CLA models match the accuracy of our non-CLA baselines while using only half as much KV cache memory. In response to your question, we have evaluated our 1B- and 3B-parameter models on MMLU, but find that MMLU does not provide useful signal for comparing the quality of different models at our scale of training compute. In particular, we observe that all the models which we compare in Figure 1 score no better than chance (25% accuracy) on MMLU: | Model | MMLU Accuracy | |-|-| | H128-MQA 1B (LR-tuned) | 23.76% | | H64-MQA 1B (LR-tuned) | 23.15% | | H128-MQA-CLA2 1B (LR-tuned) | 24.28% | | H64-MQA 3B (LR-tuned) | 23.32% | | H32-MQA 3B (LR-tuned) | 23.24% | | H64-MQA-CLA2 3B (LR-tuned) | 24.96% | For a point of reference, even the publicly-available open-source models TinyLlama 1.1B and OpenLlama 3B, each of which was trained with $\approx 10\times$ the FLOPs of our largest 3B training runs, do not achieve accuracy significantly better than chance on MMLU: | Publicly-Available Model | MMLU Accuracy | |-|-| | TinyLlama 1.1B | 25.34% | | OpenLlama 3B | 23.52% | ## What about continuous pretraining? We agree that continuous pretraining (or "uptraining" as the GQA paper calls it) for applying CLA to existing models is an interesting avenue of research. We have conducted some preliminary experiments on CLA uptraining. We have found that it is possible to convert 1B- and 3B-scale MQA models each trained on 100B tokens into MQA-CLA2 models with 5.7% and 4.9% higher perplexity, respectively, by uptraining them with CLA for 20B tokens. 
Similarly to the GQA paper, we find that initializing the uptrained model by mean-pooling KV projection weights outperforms simply dropping KV projections from the model. At the reviewers' discretion, we would be happy to include these preliminary uptraining results in the final version of the paper. We also believe it is likely possible to improve further upon our uptraining scheme, and would be happy to mention in the paper that improved schemes for CLA uptraining represent a promising direction for future research. We also note that although the original GQA paper focused on uptraining, GQA has also had a significant impact on industry practice via its direct application to LLM pretraining, as seen in models like the Llama 3 series. We hope that CLA may be able to have a similar impact via direct application to pretraining. ## What about long-context tasks? Training a high-quality long-context model with or without CLA was not possible within our resource constraints. As we mention in Section 4, we leave larger-scale evaluations of long-context, aligned models employing CLA as an interesting problem for future work. We thank the reviewer for their suggestion, and agree that in such future work, passkey retrieval would be a worthwhile task to evaluate. --- Rebuttal Comment 1.1: Comment: Thank you for your clarification. I am satisfied with the response. Good Luck!
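The mean-pooling initialization described in the rebuttal above (following the uptraining recipe of the GQA paper) can be sketched in a few lines. This is a minimal illustration under assumed weight shapes, not the authors' implementation; pairing consecutive layers for CLA2 is our assumption about the sharing pattern, and the function name is hypothetical.

```python
import numpy as np

def cla2_init_from_mqa(kv_weights):
    """Mean-pool per-layer KV projection weights into one shared
    projection per consecutive pair of layers (a CLA2-style sharing
    pattern, assumed here for illustration).

    kv_weights: list of (d_model, d_kv) arrays, one per layer.
    Returns len(kv_weights) // 2 shared projection matrices.
    """
    assert len(kv_weights) % 2 == 0, "CLA2 pairs consecutive layers"
    return [(kv_weights[i] + kv_weights[i + 1]) / 2.0
            for i in range(0, len(kv_weights), 2)]

# Toy example: 4 layers, d_model = 8, d_kv = 4.
rng = np.random.default_rng(0)
per_layer = [rng.standard_normal((8, 4)) for _ in range(4)]
shared = cla2_init_from_mqa(per_layer)
print(len(shared))  # 2
```

The alternative the rebuttal compares against, simply dropping every other layer's KV projection, would amount to keeping `kv_weights[i]` for even `i` and discarding the rest.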
Rebuttal 1: Rebuttal: We thank all the reviewers for their comments. We address their main points below: ## Could we report more quality metrics? In addition to Wikitext perplexity, we also report accuracy scores on Hellaswag, PIQA, WinoGrande, OpenBookQA, BoolQ, and ARC-E, which can be found in Table 3 and Table 4. In aggregate, we find that these metrics show that our CLA models match the accuracy of our non-CLA baselines while using only half as much KV cache memory. In response to Reviewer 8ZcC's question, we have evaluated our 1B- and 3B-parameter models on MMLU, but find that MMLU does not provide useful signal for comparing the quality of different models at our scale of training compute, as models at this scale perform no better than chance. We provide more detail in our response to Reviewer 8ZcC. ## What about adapting pre-trained models to use CLA? We agree with reviewers 8ZcC and mqB5 that adapting (or "uptraining," as the GQA paper calls it) existing models to use CLA is an interesting avenue of research. We have conducted some preliminary experiments on CLA uptraining. We have found that it is possible to convert 1B- and 3B-scale MQA models each trained on 100B tokens into MQA-CLA2 models with 5.7% and 4.9% higher perplexity, respectively, by uptraining them with CLA for 20B tokens. Similarly to the GQA paper, we find that initializing the uptrained model by mean-pooling KV projection weights outperforms simply dropping KV projections from the model. At the reviewers' discretion, we would be happy to include these preliminary uptraining results in the final version of the paper. We also believe it is likely possible to improve further upon our uptraining scheme, and would be happy to mention in the paper that improved schemes for CLA uptraining represent a promising direction for future research. 
We also note that although the original GQA paper focused on uptraining, GQA has also had a significant impact on industry practice via its direct application to LLM pretraining, as seen in models like the Llama 3 series. We hope that CLA may be able to have a similar impact via direct application to pretraining. ## What about long-context tasks? Training a high-quality long-context model with or without CLA was not possible within our resource constraints. As we mention in Section 4, we leave larger-scale evaluations of long-context, aligned models employing CLA as an interesting problem for future work. ## Could we evaluate CLA on larger models? Unfortunately, we lack the resources to train 10B- and 100B-scale models from scratch using CLA, as suggested by Reviewer skpH. Even training a single model on the same parameter and data scale as Llama 1 7B would require $30\times$ the training compute of our largest 3B-scale training runs. We hope that future work will apply CLA to models at larger scale. ## Why don't we evaluate CLA in combination with MHA? We chose to spend our resource budget evaluating models combining CLA with GQA/MQA, rather than MHA, because we believe that most practitioners who would apply CLA would want to do so in combination with GQA/MQA. As we mention in Section 2.3, CLA reduces KV cache storage footprint, whereas GQA/MQA reduces both KV cache storage footprint *and* memory bandwidth requirements during decoding. Because MQA/GQA provides these dual benefits, we expect practitioners to prefer using MQA/GQA first to obtain KV cache size reductions over MHA, and to only apply CLA to obtain further reductions on top of GQA/MQA. We note that because of the efficiency benefits of GQA at inference time, many recent models, such as Llama 3 and Mixtral, use GQA rather than MHA. We chose to include the MHA model in our design space exploration just as a point of comparison, to contextualize the performance achieved by other GQA/MQA and CLA models. 
We would be happy to clarify this point, and the reasoning for why we did not directly combine CLA with MHA, in the final version of our paper. ## Could we include more comprehensive results tables in the paper? We thank Reviewer mqB5 for pointing out that the paper could be improved by including more comprehensive results tables. For validation perplexities, we note that validation perplexities for all model architectures in our design space exploration, including those not shown in Table 1 or Figure 3, are already included in Appendix B Table 5. For accuracies on downstream tasks, the reviewer is correct that our initial submission only included downstream accuracies for a subset of architectures. Although we include downstream accuracies for selected 1B-scale architectures (see Table 3) and all 3B architectures (see Table 4 and Appendix D Table 7), we do not include downstream accuracy results for all architectures in our 1B-scale design space exploration. We will correct this in the final submission, and include a table (likely in the appendix) containing the missing benchmark results for all architectures not currently documented. We provide this table in the supplementary PDF (attached). ## Could we compare to an open-source pre-trained model? We thank Reviewer mqB5 for their suggestion. In response to the reviewer's comment, we have run an experiment comparing directly with the open-source GQA4 model TinyLlama-1.1B at its 105B-token intermediate checkpoint. In this experiment, we pretrained our own version of TinyLlama-1.1B-105B from scratch using CLA2, using otherwise-identical training data, model architecture, and training hyperparameters as described in the TinyLlama repository. In particular, we used the same cosine learning rate schedule as TinyLlama 1.1B, which decays over 3T tokens (although we only ran training to 105B tokens). 
In this comparison, we find that our TinyLlama-1.1B-105B checkpoint trained with CLA2 matches or exceeds the performance of the original TinyLlama-1.1B-105B checkpoint. We present these results in more detail in the supplementary PDF (attached), and in our response to Reviewer mqB5. Pdf: /pdf/da4d0b327e38d821abc469948c3b70318f1e209e.pdf
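For readers unfamiliar with the mechanism under discussion, a toy sketch of cross-layer KV sharing may help. This is an illustrative single-head stack, not the paper's implementation: it omits causal masking, multi-head structure, layernorm, and MLP blocks, and the even-layer ownership pattern is an assumed CLA2 configuration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(q, k, v):
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def cla_forward(x, layers, share_factor=2):
    """Toy CLA decoder stack: only every `share_factor`-th layer owns
    K/V projections; the layers in between reuse the cached K and V
    from the nearest owning layer below, so only 1 / share_factor of
    the layers contribute key/value tensors to the KV cache."""
    k = v = None
    for i, p in enumerate(layers):
        q = x @ p["wq"]
        if i % share_factor == 0:             # this layer computes fresh K/V
            k, v = x @ p["wk"], x @ p["wv"]
        x = x + attention(q, k, v) @ p["wo"]  # otherwise k, v are reused
    return x

# Toy configuration: 4 layers, width 4, CLA2 (KV owned by even layers).
rng = np.random.default_rng(0)
d = 4
stack = []
for i in range(4):
    p = {"wq": 0.1 * rng.standard_normal((d, d)),
         "wo": 0.1 * rng.standard_normal((d, d))}
    if i % 2 == 0:
        p["wk"] = 0.1 * rng.standard_normal((d, d))
        p["wv"] = 0.1 * rng.standard_normal((d, d))
    stack.append(p)
out = cla_forward(rng.standard_normal((3, d)), stack)
print(out.shape)  # (3, 4)
```

The sketch makes the memory argument in the rebuttal concrete: a cache would store `k` and `v` only for the owning layers, halving KV cache size for CLA2 regardless of whether each owning layer uses MHA, GQA, or MQA heads.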
NeurIPS_2024_submissions_huggingface
2024
Perceptual Fairness in Image Restoration
Accept (poster)
Summary: This paper considers the fairness issue in image restoration and proposes the Group Perceptual Index to measure the distance between the restoration distribution and the ground-truth distribution. Experimental and theoretical results demonstrate the superiority of the proposed perceptual fairness measure over previous methods. Strengths: 1. Fairness is important for the image restoration community and the topic is interesting to study. The proposed Group Perceptual Index is a reasonable way to measure fairness properly. 2. The paper includes solid theoretical and experimental results, which provide evidence for fairness measurement. 3. The paper is well-written and easy to follow. Weaknesses: 1. The main results are mainly based on face restoration. Can this method be useful for general scenarios of image restoration? 2. The paper proposes a measure to detect the fairness issue. But can you suggest some potential solutions to address this problem? Technical Quality: 3 Clarity: 4 Questions for Authors: Can the method be extended to measure other fields, like image generation? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes, the authors discuss limitations carefully. Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Can our method be useful for general-content image restoration? This is a very interesting point that definitely deserves an explanation. While the proposed GPI is indeed suitable for evaluating fairness in natural images with complex structures, fairness issues are particularly critical when dealing with human images due to their societal implications (which is why we only evaluated facial images). For example, if a general-content image restoration algorithm performs better on images with complex structures than on images of clear skies, this discrepancy is unlikely to be problematic for practitioners, as long as the algorithm attains good performance overall. Moreover, previous works (e.g., [1]) evaluated fairness with respect to non-human subjects (e.g., dogs and cats), but these studies provide limited insights into human-related fairness issues, which often arise due to subtle differences between images. Expanding our method to other datasets remains an avenue for future work. We will include a discussion about this point in the paper. ### How can we train models to achieve perceptual fairness? Please let us propose a method to train algorithms to achieve the best possible perceptual fairness when the sensitive attributes of each ground truth image in the data are known. In particular, one can train a conditional discriminator that takes the sensitive attribute as an additional input, namely, the discriminator is trained to distinguish between the ground truth images of a group (e.g., women images) and their reconstructed images (e.g., the reconstructions produced for degraded images of women). While training such a discriminator, one can add a regularization to the objective of the restoration algorithm (in an "adversarial" fashion) to minimize the deviation in the discriminator scores for different groups. 
For example, at each training step of the restoration algorithm, one can average the discriminator scores produced for each group and then minimize the standard deviation of the results. This will ensure that the average discriminator scores produced for the reconstructions of each group would be similar for all groups. While this proposed approach was not explored in our paper, it represents an interesting direction for training fair image restoration algorithms. We will thoroughly describe this proposed approach in our manuscript, as a suggestion for future work. ### Can our method be extended to other fields such as image generation? Yes! Thank you for raising this interesting question. The concept of GPI can indeed be extended to other fields, and in particular to image generation. For example, one can regularize a diffusion model (a "time"-dependent image denoiser) during training to attain similar GPIs for all groups. This can be achieved, e.g., by regularizing the denoiser at each time-step using the "adversarial" technique we described above ("*How can we train models to achieve perceptual fairness?*"), while making the discriminator time-dependent as well. Incorporating such a regularization would ensure that the output distribution at each time-step of the diffusion would be balanced with respect to the specified sensitive attributes. This implies that the output distribution at the end of the generation process would also be balanced. We will discuss this interesting potential avenue in our paper. #### References [1] Ajil Jalal et al. "Fairness for Image Generation with Uncertain Sensitive Attributes." Proceedings of the International Conference on Machine Learning (ICML), 2021. --- Rebuttal Comment 1.1: Title: After Rebuttal Comment: Thanks for the rebuttal. The paper is interesting but also has potential unsolved/undiscussed issues. I insist on my original score.
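The group-wise regularization proposed in the rebuttal above (average the discriminator scores per group, then minimize the standard deviation of the per-group averages) can be sketched as follows; the function name and the plain-array interface are illustrative assumptions, not the authors' code.

```python
import numpy as np

def perceptual_fairness_penalty(scores, groups):
    """Standard deviation of per-group mean discriminator scores.

    scores: (N,) discriminator outputs for reconstructed images.
    groups: (N,) integer sensitive-attribute label per image.
    A value of 0 means every group receives the same average score.
    """
    means = [scores[groups == g].mean() for g in np.unique(groups)]
    return float(np.std(means))

# Toy check: equal per-group means give zero penalty.
s = np.array([0.2, 0.4, 0.4, 0.2])
g = np.array([0, 0, 1, 1])
print(perceptual_fairness_penalty(s, g))  # 0.0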
Summary: This work introduces a new method for assessing fairness in image restoration, called the Group Perceptual Index (GPI). This measure quantifies the statistical difference between the distribution of a group's original images and the distribution of their restored versions. The authors illustrate the effectiveness of GPI by applying it to advanced face image super-resolution algorithms. Strengths: - the problem tackled in this paper is of practical importance - the paper is well written - the proposed method is theoretically sound and is shown to work in meaningful ways when used on the problem space of image super-resolution Weaknesses: - The usefulness of the method is validated only on the super-resolution setting. Given that the proposed method has the potential to impact various image restoration algorithms, it would have been interesting to see how well it does on other image restoration applications such as image denoising, deblurring, etc. - It is also not clear what kind of changes to the existing super-resolution methods might result in better fairness handling. Some insights into why certain methods are not as good at fairness handling as others might have helped future works. Technical Quality: 2 Clarity: 2 Questions for Authors: Please address my comments under weaknesses Confidence: 1 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The authors have addressed the limitations to a satisfactory extent. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Demonstration only on image super-resolution tasks We opted to illustrate our approach on 12 different super-resolution tasks (4 scale factors and 3 noise levels) simply because of the availability of many methods to compare against on these tasks. Note that this choice aligns with common practice in the field, as related works also focus on image super-resolution [1,2,3], which serves as a standard benchmark due to the availability of numerous algorithms for comparison. Nonetheless, we agree that validating our method on other types of image restoration tasks, such as image denoising or deblurring, or on different modalities like audio, video, or text restoration, would further substantiate its robustness and broad applicability. To address this, we will add experiments with image denoising and deblurring to the appendices and discuss the potential for future work on additional types of degradations and data modalities. ### What kind of changes to the existing super-resolution methods might result in better fairness handling? Please let us propose a method to train algorithms to achieve the best possible perceptual fairness when the sensitive attributes of each ground truth image in the data are known. In particular, one can train a conditional discriminator that takes the sensitive attribute as an additional input, namely, the discriminator is trained to distinguish between the ground truth images of a group (e.g., women images) and their reconstructed images (e.g., the reconstructions produced for degraded images of women). While training such a discriminator, one can add a regularization to the objective of the restoration algorithm (in an "adversarial" fashion) to minimize the deviation in the discriminator scores for different groups. For example, at each training step of the restoration algorithm, one can average the discriminator scores produced for each group and then minimize the standard deviation of the results. 
This will ensure that the average discriminator scores produced for the reconstructions of each group would be similar for all groups. While this proposed approach was not explored in our paper, it represents an interesting direction for training fair image restoration algorithms. We will thoroughly describe this proposed approach in our manuscript, as a suggestion for future work. ### Insights into why certain methods are not good at fairness handling compared to others Thank you for raising this important point. Please note that our theoretical section (Section 3) attempts to provide such insights. For example, we show that common image restoration algorithms, such as the MMSE point estimate or the posterior sampler, may often lead to poor perceptual fairness. This means that perceptual fairness is not "trivially" acquired by common algorithms. Thus, it is interesting to ask: 1. Under which circumstances can some algorithm achieve perfect GPI for all groups simultaneously (Theorem 2)? 2. Otherwise, when perfect GPI cannot be achieved for all groups simultaneously, under which circumstances can some algorithm achieve perfect perceptual fairness? (Theorems 3 and 4). For example, from Theorem 4 we learn that, among all the algorithms that achieve perfect Perceptual Index (PI), the better ones in terms of perceptual fairness are those that excel on the "toughest" groups, and this is not directly achieved by methods that simply attain perfect PI (e.g., posterior sampling). #### References [1] Ajil Jalal et al., "Fairness for Image Generation with Uncertain Sensitive Attributes." Proceedings of the International Conference on Machine Learning (ICML), 2021. [2] Yochai Blau and Tomer Michaeli, "The Perception-Distortion Tradeoff." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018. [3] Guy Ohayon et al., "The Perception-Robustness Tradeoff in Deterministic Image Restoration." 
Proceedings of the International Conference on Machine Learning (ICML), 2024. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thanks for all the clarifications made in the rebuttal. Rebuttal has addressed most of my concerns, hence I will upgrade my original score.
Summary: This study presents a novel method to evaluate fairness in image restoration using the Group Perceptual Index (GPI). GPI quantifies the statistical disparity between a group's original images and their restored versions. Fairness is assessed by comparing GPIs across multiple groups, striving for perfect Perceptual Fairness (PF) where all GPI values are identical. The research provides theoretical insights into this innovative fairness concept, drawing comparisons to existing frameworks, and showcases its practical implementation through advanced face image super-resolution algorithms. Strengths: 1. The paper is well-structured and includes sufficient theoretical explanations. 2. The concept of GPI is logically sound. 3. The paper demonstrates that the proposed method outperforms other baseline methods. 4. The paper thoroughly discusses both the advantages and limitations of the proposed method. The advantages highlight the method's effectiveness and potential benefits, while the limitations are clearly outlined, providing a balanced view of its capabilities and areas for improvement. Weaknesses: 1. The authors introduce a novel method to evaluate the fairness of image restoration. However, it is important to note that this method has been validated exclusively on image super-resolution tasks. Further validation on other types of image restoration tasks would be beneficial to demonstrate its broader applicability and robustness. 2. How can sensitive attributes be detected and acquired? The impact of sensitive attributes deserves an in-depth discussion. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is the primary difference between fairness evaluation and standard image quality assessment metrics like PSNR and SSIM? 2. How does this method contribute to the design of fair and unbiased image restoration algorithms? 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations have been thoroughly discussed and adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Demonstration only on image super-resolution tasks We opted to illustrate our approach on 12 different super-resolution tasks (4 scale factors and 3 noise levels) simply because of the availability of many methods to compare against on these tasks. Note that this choice aligns with common practice in the field, as related works also focus on image super-resolution [1,2,3], which serves as a standard benchmark due to the availability of numerous algorithms for comparison. Nonetheless, we agree that validating our method on other types of image restoration tasks, such as image denoising or deblurring, or on different modalities like audio, video, or text restoration, would further substantiate its robustness and broad applicability. To address this, we will add experiments with image denoising and deblurring to the appendices and discuss the potential for future work on additional types of degradations and data modalities. ### How can sensitive attributes be detected and acquired? Thank you for raising this interesting question. Please note that our work assumes that the sensitive attributes are prespecified, which is the case handled by most works that tackle fairness concerns. It should be noted that the term "sensitive" here is application dependent. For example, in one application gender may be considered the only sensitive attribute, whereas in another application age may be considered sensitive as well. This is despite the fact that both applications use precisely the same data for training. Thus, sensitive attributes cannot be detected automatically from the data. Rather, they should be specified by the user according to the particular societal concerns in question. We will discuss this interesting point in the manuscript. ### The difference between GPI and standard metrics like PSNR We appreciate this question and address it in Appendix G.5 (L794 onwards), where we show that such metrics are not good indicators of perceptual bias. 
Indeed, our experiments (Figures 8-10 in the appendix) show that, while the different groups attain roughly the same distortion (e.g., PSNR, LPIPS), their GPIs differ significantly, a discrepancy that is also visually evident. ### How can our method contribute to the design of fair and unbiased image restoration algorithms? Please let us propose a method to train algorithms to achieve the best possible perceptual fairness when the sensitive attributes of each ground truth image in the data are known. In particular, one can train a conditional discriminator that takes the sensitive attribute as an additional input, namely, the discriminator is trained to distinguish between the ground truth images of a group (e.g., women images) and their reconstructed images (e.g., the reconstructions produced for degraded images of women). While training such a discriminator, one can add a regularization to the objective of the restoration algorithm (in an "adversarial" fashion) to minimize the deviation in the discriminator scores for different groups. For example, at each training step of the restoration algorithm, one can average the discriminator scores produced for each group and then minimize the standard deviation of the results. This will ensure that the average discriminator scores produced for the reconstructions of each group would be similar for all groups. While this proposed approach was not explored in our paper, it represents an interesting direction for training fair image restoration algorithms. We will thoroughly describe this proposed approach in our manuscript, as a suggestion for future work. #### References [1] Ajil Jalal et al., "Fairness for Image Generation with Uncertain Sensitive Attributes." Proceedings of the International Conference on Machine Learning (ICML), 2021. [2] Yochai Blau and Tomer Michaeli, "The Perception-Distortion Tradeoff." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 
[3] Guy Ohayon et al., "The Perception-Robustness Tradeoff in Deterministic Image Restoration." Proceedings of the International Conference on Machine Learning (ICML), 2024. --- Rebuttal Comment 1.1: Comment: Considering the feedback from other reviewers and the authors' responses, I retain the original rating of weak accept.
Summary: This paper reveals that the conventional definition of fairness for image restoration is restrictive and often causes controversy. To address this issue, the authors introduce a new approach to measure fairness in image restoration tasks by proposing the Group Perceptual Index (GPI). Specifically, they propose assessing the fairness of an algorithm by comparing the GPI of different groups, where perfect Perceptual Fairness (PF) is achieved if the GPIs of all groups are identical. They theoretically study this notion of fairness and demonstrate its utility on state-of-the-art face image super-resolution algorithms. Strengths: 1. The paper examines existing fairness measures such as Representation Demographic Parity (RDP) and highlights their limitations. It shows that these measures can be overly simplistic and may not detect subtle biases that affect different groups. 2. The paper proposes the Group Perceptual Index (GPI) as a measure of fairness in image restoration, which is a novel and significant contribution. 3. It provides a theoretical analysis of the properties of GPI and its relationship to other fairness measures. 4. The authors use a variety of datasets and experimental setups to demonstrate the effectiveness of GPI, which are convincing. Weaknesses: 1. The Group Perceptual Index (GPI) also increases the complexity of the evaluation process for image restoration algorithms, compared with traditional fairness methods, because it involves comparing the distributions of different groups. 2. The experiments use synthetic datasets generated from high-quality, aligned face image datasets like CelebA-HQ. 3. The paper only evaluates the proposed method on face datasets and does not provide results on other kinds of image data. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. The paper shows that achieving perfect GPI for all groups is often not feasible, especially under severe degradation conditions. 
It leaves open the question of how best to balance fairness among different groups in practice when perfect GPI cannot be achieved. 2. In addition to face synthesis, is the proposed GPI suitable for natural images with complex structures? 3. The application scenarios are not clear. How do we apply the proposed GPI to real-world applications? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: The paper rethinks fairness in image restoration and proposes a novel method, called the Group Perceptual Index (GPI), to measure fairness for image restoration models. The proposed method can effectively detect subtle and malicious biases, enhancing the robustness and security of image restoration systems. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Complexity of evaluating perceptual fairness Thank you for raising this important point. We acknowledge that computing the GPI of each group increases the complexity of evaluating fairness compared to previous methods, which typically compute classification hit rates (e.g., counting the reconstructed images classified as having a specified sensitive attribute). However, it is important to note that both our method and previous methods require each sample to be processed through a classifier. In our method, the classifier is used to extract deep image features to compute metrics such as FID and KID, whereas in previous methods the classifier evaluates the predicted sensitive attributes of each image. Thus, the additional computational overhead of our method arises from two factors: 1. Extracting deep features from the source images in addition to the reconstructed images, effectively doubling the computation required for feature extraction compared to previous methods. 2. Approximating a statistical divergence (e.g., FID) between the extracted features of the ground truth images and the extracted features of the reconstructed images. This additional step introduces some computational overhead (e.g., computing the empirical mean and covariance matrix of the samples, as in FID), but it is relatively minor in practice compared to the benefits of our more nuanced fairness evaluation. We will point out this limitation in our paper. ### Why do we use synthetic datasets? We appreciate the reviewer's concern regarding our use of synthetic datasets. Please note that, as discussed in Section 4.1 of our paper, we opted to use synthetic, high-quality datasets because existing face datasets often lack ground truth labels for sensitive attributes such as ethnicity, age, and gender. Moreover, the datasets that do include such labels are typically imbalanced w.r.t. these attributes.
To adequately approximate and compare the GPI of different groups, it is essential to have equal amounts of data from each group. Otherwise, approximating the GPI with metrics like FID would lead to completely different scoring scales for varying sample sizes (see, e.g., Figure 1 in [1], which shows that FID suffers from bias when using small sample sizes). Generating synthetic datasets allows us to control the number of images from each group, and therefore allows us to ensure balanced and fair evaluations. ### Applicability to other types of data besides facial images This is a very interesting point that definitely deserves an explanation. While the proposed GPI is indeed suitable for evaluating fairness in natural images with complex structures, fairness issues are particularly critical when dealing with human images due to their societal implications (which is why we only evaluated facial images). For example, if a general-content image restoration algorithm performs better on images with complex structures than on images of clear skies, this discrepancy is unlikely to be problematic for practitioners, as long as the algorithm attains good performance overall. Moreover, previous works (e.g., [2]) evaluated fairness with respect to non-human subjects (e.g., dogs and cats), but these studies provide limited insights into human-related fairness issues, which often arise due to subtle differences between images. Expanding our method to other datasets remains an avenue for future work. We will include a discussion about this point in the paper. ### How to balance fairness among different groups in practice? Let us propose a method to train algorithms to achieve the best possible perceptual fairness when the sensitive attributes of each ground truth image in the data are known.
In particular, one can train a conditional discriminator that takes the sensitive attribute as an additional input; namely, the discriminator is trained to distinguish between the ground truth images of a group (e.g., images of women) and their reconstructed images (e.g., the reconstructions produced for degraded images of women). While training such a discriminator, one can add a regularization to the objective of the restoration algorithm (in an "adversarial" fashion) to minimize the deviation in the discriminator scores for different groups. For example, at each training step of the restoration algorithm, one can average the discriminator scores produced for each group and then minimize the standard deviation of the results. This encourages the average discriminator scores produced for the reconstructions of each group to be similar across all groups. While this proposed approach was not explored in our paper, it represents an interesting direction for training fair image restoration algorithms. We will thoroughly describe this proposed approach in our manuscript, as a suggestion for future work. ### How do we apply the proposed GPI to real-world applications? Many practical imaging systems should ensure that their algorithms do not introduce or amplify biases against any particular group. Mobile phones, for example, are used by everyone, and all of them incorporate image restoration algorithms within the Image Signal Processing (ISP) pipeline to produce a high-quality image from the given sensor measurements. When the degradation is sufficiently severe (e.g., in low-light conditions), even a well-performing image restoration algorithm (e.g., a posterior sampler) may treat some groups better than others. Thus, it is important to evaluate the fairness of such algorithms. For example, the GPI can help identify subtle biases that traditional methods might overlook, thereby alerting practitioners to fairness issues in these systems. #### References [1] Mikołaj Bińkowski et al.
"Demystifying MMD GANs." Proceedings of the International Conference on Learning Representations (ICLR), 2018. [2] Ajil Jalal et al., "Fairness for Image Generation with Uncertain Sensitive Attributes." Proceedings of the International Conference on Machine Learning (ICML), 2021. --- Rebuttal Comment 1.1: Comment: Thank you for your responses and comprehensive explanation. Your responses partially resolve my concerns, so I keep my original score unchanged. Overall, I like this paper and appreciate the efforts of the authors on the fairness of image restoration, which is inspiring.
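The FID-based approximation of the GPI described in the rebuttal above (fit a Gaussian via the empirical mean and covariance of each group's deep features, then compute a Fréchet distance between ground-truth and reconstruction features) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names, random feature arrays, and the max-minus-min fairness gap are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    """Frechet distance between Gaussian fits of two deep-feature samples,
    as in FID: ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2 * covmean))

def perceptual_fairness_gap(groups):
    """Per-group GPI = FID(ground-truth features, reconstruction features);
    report the spread of the per-group GPIs as a simple fairness summary."""
    gpis = {g: fid(real, fake) for g, (real, fake) in groups.items()}
    return gpis, max(gpis.values()) - min(gpis.values())

# illustrative random "deep features" standing in for classifier embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 8))                 # ground-truth features
y = rng.normal(loc=1.0, size=(200, 8))        # biased reconstruction features
gpis, gap = perceptual_fairness_gap({"group_a": (x, y), "group_b": (x, x)})
```

Perfect Perceptual Fairness would correspond to all per-group GPIs being equal, i.e. a gap of zero.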
Rebuttal 1: Rebuttal: # Thank you! We are deeply grateful to all the reviewers for dedicating their time to evaluate our paper. The feedback we received has been highly encouraging and has helped us improve the quality of our work.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Identifiability Guarantees for Causal Disentanglement from Purely Observational Data
Accept (poster)
Summary: The authors propose a method to identify causal and exogenous variables in Gaussian Additive Noise Models from purely observational data. Strengths: The paper proposes a novel approach (the setting might have been largely considered elsewhere; see my comments in **Weaknesses** and **Questions**). - the proofs (I checked a significant part of the appendix, but not everything) seem to be sound - the identifiability notion up to upstream layers is an interesting concept (what it means and how it compares to other notions needs to be better discussed) I chose a conservative score; however, if the authors address my concerns (mostly about making statements more precise) in a satisfying manner, I will increase my score. Weaknesses: ## Major points - Your statement _our work is the first to establish identifiability guarantees in the purely observational setting without imposing any structural assumptions over the mixing function_ seems to be missing some nuance: - assuming an ANM is a structural assumption (and, at least from a theoretical perspective, a rather strong one) - you cite [32], which shows that nonlinear ICA can also uncover the causal graph. As I consider nonlinear ICA methods to be purely observational, and as [32] does not assume additivity, this seems to contradict your claim (as, in that case, you would get the DAG and the exogenous variables). Though in [32], there is no level corresponding to $H$ -- if this is the novelty, then please be more precise.
- optimizing the Jacobian is computationally very expensive; I haven't found any discussion on this aspect - a more detailed discussion on delineating the contribution would be highly appreciated (from both score-based methods and [32]) - I also encourage the authors to address the (limitations of) identifiability up to upstream layers; though, as I wrote above, I find the concept interesting, my understanding is that this is a very limited identifiability notion ## Minor points - line 93: "principal" -> "principle" - use different $C, P_\pi$ for Defs 1 and 2 - the definition of, e.g., $\beta$ in Lem 1 comes a bit late; this makes the reader wonder what it stands for - resolve each abbreviation and symbol in each Table and Figure caption, even if you defined those quantities in the main text Technical Quality: 3 Clarity: 3 Questions for Authors: - What exactly do you mean by asymmetries? I only found the related line 46, and that does not seem to be any novel key insight, so I am assuming I am missing your point. - What do you exactly mean by causal disentanglement? - Why do you consider Gaussian exogenous variables when that causes the (unsurprising) impossibility result in 3.3? - Can you use insights from score-matching-based methods to improve your identifiability results (those methods can identify leaf nodes more easily, whereas your method works better for root nodes; I reckon you draw some connections, but I could not find this specific aspect)? - In Fig. 4, what is the MAC for SSM and STEIN? Am I interpreting it correctly that it can be anything? - How does identifiability up to layers (particularly, distinguishing leaf and non-leaf variables) relate to block-identifiability (if it does relate), or, particularly, to content and style latent variables (cf. [1])? ## References - [1] Von Kügelgen, J., Sharma, Y., Gresele, L., Brendel, W., Schölkopf, B., Besserve, M., & Locatello, F. (2021).
Self-supervised learning with data augmentations provably isolates content from style. Advances in neural information processing systems, 34, 16451-16467. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors addressed broader impact. I asked for clarifications regarding computational and novelty aspects. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review! We appreciate that you found our proofs sound and our newly defined notion of identifiability to be interesting. We would like to address your comments below: > **“What do you exactly mean by causal disentanglement?”** Following this review (https://arxiv.org/abs/2206.15475), we define causal disentanglement to be the process of learning about latent variables $Z$ from the set of observational variables $X$, such that $X = g(Z)$ for some mapping $g$, and $Z$ factorizes as $p(Z) = \prod_{i=1}^n p(Z_i | Z_{pa(i)}, \epsilon_i)$, where $\epsilon_i$ is an exogenous noise term associated with $Z_i$. We additionally note that this problem has also been called “causal representation learning” in the literature. We will refine Section 2 to make this clearer. > **“you cite [32] … there is no level corresponding to H -- if this is the novelty, then please be more precise.”** > **“a more detailed discussion on delineating the contribution ...”** Thank you for your suggestion. We would like to clarify that prior score-based methods (including [32]) consider the problem of causal structure learning when all causal variables are _fully observed_, while we consider the more difficult problem of causal disentanglement, where the causal variables are _inherently latent and can only be viewed through an unknown mixing function_, which we denote by $H$. Using the notation defined in our paper, [32] would assume direct access to the latent variables, $Z$, for which we instead derive methods to learn representations. We will add a more thorough discussion in Section 1.1 clarifying this distinction between our contributions. > **“assuming an ANM is a structural assumption…”** We intended to distinguish structural assumptions from functional assumptions, with the latter being restrictions on _just the mapping from the latent to the observed space_ as defined by the mixing function.
For clarity, previous works in purely observational causal disentanglement rely on structural assumptions on this mapping, the most common one being the pure child assumption ([11,14,19,52,53] in the manuscript), assuming that each latent variable has multiple observational variables for which they are the only parent. We in contrast allow any observational variable to be determined by any subset of latent variables. We briefly discussed this in lines 75-81, but we will revise this discussion to make it more clear. > **“optimizing the Jacobian is computationally very expensive; I haven't found ...”** Thank you for this comment. We acknowledge that optimizing the Jacobian in Eq (1) can be computationally laborious. We briefly discussed this difficulty in lines 217-220 in Section 4.1, which is followed by our proposed solution. We also discussed the difficulty of estimating the Jacobian in lines 275-281 in Section 5.1. We will add pointers to these places in the introduction. > **“..encourage the authors to address the (limitations of) identifiability up to upstream layers …”** > **“Can you use insights … to improve your identifiability…”** Thank you for your suggestion. The primary limitation of identifiability up to upstream layers is that variables which are more downstream in the underlying causal graph have weaker identifiability guarantees. Conversely, variables that are more upstream in the causal graph, or more influential in other words, have stronger identifiability guarantees. As shown in Section 3.3, these identifiability results can _not_ be improved without further assumptions. It is currently unclear what minimal additional assumptions are required to achieve stronger identifiability guarantees. We note this as an important question for future work, and we will explicitly add this discussion to Section 6. > **“What exactly do you mean by asymmetries?”** Asymmetries refers to the asymmetric relationships between causes and effects present in this model. 
More specifically, we will have Gaussian residuals if we regress the effect on its causes, but not the other way around. These relationships are useful in establishing causal directions, which we utilize to learn the causal layers of the underlying graph. > **“Why do you consider Gaussian exogenous variables when that causes …”** The Gaussian noise assumption is common for nonlinear additive noise models given their nice theoretical properties that allow for the whole causal graph to be learned when all causal variables are observed (cf. [33] in the manuscript). Our theoretical proofs similarly rely on this Gaussianity assumption, which is why we consider them even though they cause the specified impossibility result. Considering whether stronger identifiability results could be achieved under non-Gaussianity assumptions is an interesting direction for future work. > **“In Fig. 4, what is the MAC for SSM and STEIN? …”** The labels “SSM” and “STEIN” in Figure 4 refer to the black and red vertical lines indicating these methods’ average signal-to-error ratios when used to estimate the pointwise Jacobians of the score, which are 6 dB and 2 dB, respectively. We will make these labels clearer with a legend in our revision. The MAC of the exogenous noise estimates using these estimation methods for various sample sizes is depicted in Table 2. > **“… relate to block… content and style latent variables”** Since identifiability up to layers means that each causal variable can be identified up to transformation of itself and all variables in or above its layer, this means that we can achieve block-identifiability of each variable and all variables in or above its layer. In particular, this means the root variables can be identified up to their own block. For content and style variables, as content variables are in the layers above style variables, they can be identified up to their own block. --- Thank you for the minor points as well.
We will revise accordingly. --- Rebuttal Comment 1.1: Title: Score change 4->5 Comment: Thank you for your detailed response! - I'd consider using causal representation learning, as it seems to me to be a more accepted and precise notion (disentanglement in general representation learning is usually a vague concept) - I'd consider using "graphical" instead of "structural." As another reviewer pointed out, using "structural" is presumably very confusing. Nonetheless, assuming ANMs is still a strong assumption --- Rebuttal 2: Title: Response to comment Comment: Thank you for the discussion and for updating the scores! Thanks for the suggestion! We will revise the manuscript to clarify the concepts regarding (1) causal representation learning and causal disentanglement, and (2) structural restrictions and graphical restrictions. Our usage and initial thoughts aim to align with recent prior works in this area (e.g., the usage of causal disentanglement in [1], and structural restrictions in [2]). Nonetheless, we will clarify these concepts to improve readability. In addition, we would like to comment on why we believe that while ANM poses certain assumptions, it can still be a useful model in many scenarios: **The Additive Noise Model Assumption** When compared to alternative assumptions (e.g., independent components, which are a special case of ANM) in identifiable representation learning, we believe that ANM is relatively less restrictive, as it allows dependencies between the latent components. These dependencies are important, in our opinion, as the latent factors should be related to each other. 
Additionally, in the causal structure learning setting where all causal variables are observed, ANMs are frequently used because (1) their theoretical properties are well understood, (2) efficient methods exist for learning them, and (3) they can fit non-parametric relationships, making them flexible for modeling many real-world causal systems, such as gene regulatory networks [3]. Therefore we believe that it would be helpful to devise understandings and methods to extend these models to the causal representation learning setting, where we can handle perceptual and non-tabular observations. --- References: [1] Causal Machine Learning: A Survey and Open Problems \ [2] Linear causal disentanglement via interventions \ [3] Estimation of genetic networks and functional structures between genes by using Bayesian networks and nonparametric regression --- Rebuttal Comment 2.1: Title: Score re-evaluation Comment: Thanks for the detailed response. I went through your earlier response again, and in this light, I modified my score to 6. --- Reply to Comment 2.1.1: Title: Response Comment: Thank you again for the discussion! We really appreciate the suggestions.
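The regression asymmetry mentioned in this thread (Gaussian residuals when regressing the effect on its causes, but not in the anti-causal direction) can be illustrated with a toy check. This is only a sketch of the general ANM principle, not the paper's algorithm; the cubic mechanism, the polynomial regressors, and the D'Agostino normality test are illustrative choices.

```python
import numpy as np
from scipy import stats

def direction_pvalues(x, y, deg=5):
    """Fit polynomial regressions in both directions and return the
    normality-test p-value of each residual: under an additive Gaussian
    noise model, the causal direction yields (near-)Gaussian residuals."""
    r_fwd = y - np.polyval(np.polyfit(x, y, deg), x)  # regress effect on cause
    r_bwd = x - np.polyval(np.polyfit(y, x, deg), y)  # regress cause on effect
    return stats.normaltest(r_fwd).pvalue, stats.normaltest(r_bwd).pvalue

# toy ANM: x -> y with a cubic mechanism and additive Gaussian noise
rng = np.random.default_rng(0)
x = rng.normal(size=3000)
y = x ** 3 + rng.normal(size=3000)
p_xy, p_yx = direction_pvalues(x, y)
# residuals in the causal direction look far more Gaussian than in the
# anti-causal direction, so p_xy should greatly exceed p_yx
```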
Summary: This paper concerns itself with the problem of causal disentanglement from purely observational data. The setting is that data X is generated as X = HZ, where H is a linear mixing matrix and Z are latent variables that follow a nonlinear additive Gaussian noise model. In this setting, it is shown that the latent variables can be identified up to upstream layers (Defn 1), when given access to infinite data. The idea is to compute the score functions (gradient of the log-likelihood) and use their structural properties to recover the layers. This is then converted into an algorithm, which computes the desired score functions using standard score-estimation methods and an application of quadratic programming to identify the model. The technique is validated numerically via simulated experiments. This setting is similar to [1], except that interventional data is assumed to be unavailable; the identifiability results are accordingly weaker, as expected. However, the claims made in the paper seem too strong (e.g. this is not the first paper to consider the purely observational setting, see [2]) and the proof techniques also seem to have appeared in prior works such as [3] (see weaknesses). ### References: - [1] Score-based causal representation learning with interventions. B. Varici, E. Acarturk, K. Shanmugam, A. Kumar, A. Tajer. - [2] Identifiability of deep generative models without auxiliary information. B Kivva, G Rajendran, P Ravikumar, B Aragam - [3] Score matching enables causal discovery of nonlinear additive noise models. P Rolland, V Cevher, M Kleindessner, C Russell, D Janzing, B Schölkopf, F Locatello Strengths: - Compared to a flurry of recent works that assume the existence of interventional data to recover latent variables, this work studies purely observational data, which is a more useful setting. - The notion of identifiability up to layer-wise transformations is novel to the best of my knowledge and could potentially be useful elsewhere.
Weaknesses: - L184/L284 claims there are no structural results on the mixing function; however, assumption 1 assumes that the mixing function is linear, which is a pretty big restriction. This weakens the claim of this paper. - L81-82 says "our work is the first to establish identifiability guarantees in the purely observational setting without imposing any structural assumptions over the mixing function". However, the prior work [2] also considers the purely observational setting, and the assumption they make on the mixing function is piecewise-linearity, which is much weaker than the linearity assumption made here. That makes this claim invalid; could the authors clarify this point? - This work seems to be an extension of [3] where an additional linearity is added. The lemmas and proofs also seem similar. Could the authors describe the main differences in the assumptions and/or additional difficulties in the proof? - Experiments are on synthetic data. While they weakly validate the theoretical results, an application to real-life would be needed to see if these ideas extend to practice. Technical Quality: 2 Clarity: 2 Questions for Authors: Some questions were raised above. - Identifiability up to layers seems a bit hard to grasp. While theoretically true, what does it mean for the practitioner? - Typo in L26: extend -> extent Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 1 Limitations: Limitations have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
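The generative setting described in the summary above (observations produced by a linear mixing X = HZ of latent variables Z that follow a nonlinear additive Gaussian noise model) can be sketched as follows; the three-node chain, the tanh/sin mechanisms, and the dimensions are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_obs = 1000, 6

# latent ANM over a chain z1 -> z2 -> z3: each node is a nonlinear
# function of its parent plus independent Gaussian exogenous noise
eps = rng.normal(size=(n, 3))
z1 = eps[:, 0]
z2 = np.tanh(z1) + eps[:, 1]
z3 = np.sin(z2) + eps[:, 2]
Z = np.column_stack([z1, z2, z3])

# purely observational data: linear mixing with a full-column-rank H
H = rng.normal(size=(d_obs, 3))
X = Z @ H.T
```

The identifiability question debated in this review is then what can be recovered about Z (and the chain structure) from samples of X alone.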
Rebuttal 1: Rebuttal: Thank you for appreciating our problem setting and for recognizing the novelty of our defined notion of identifiability. We would like to address your concerns and questions below: > **“L184/L284 claims there are no structural results on the mixing function, however assumption 1 assumes that the mixing function is linear … ”** To clarify, we consider the assumption of linear mixing between the latent and observed variables to be a _functional_ assumption rather than a _structural_ restriction, which refers to hard limits on the sets of latent variables that can determine each observational variable. In our setting, we allow any subset of latent variables to determine any observational variable. This is in contrast to previous works in purely observational causal disentanglement, which rely on the pure child assumption ([11,14,19,52,53] in the manuscript), assuming that each latent variable has multiple observational variables for which they are the only parent. We note that we chose to assume a linear mixing as it is essential to the proofs of our theoretical guarantees. However, our results additionally hold when the true mixing function can be reduced to linear mixing, such as in the case of polynomials being reduced to linear mappings (c.f., [2,57] in the manuscript). > **“L81-82 says "our work is the first to establish identifiability guarantees in the purely observational setting without imposing any structural assumptions over the mixing function". However, the prior work [2] …”** Thank you for raising this concern. The setting considered in [2] is very different from the problem setting in our paper. While [2] considers the purely observational setting, their results are dependent on the assumption that the latent variables are distributed according to a Gaussian mixture model, which does not specifically model the causal graph, and does not encapsulate the nonlinear additive Gaussian noise model that we consider. 
Therefore, the results in [2] are not transferable to our setting, and our original claim in L81-82 holds since we consider the setting of causal disentanglement. We will refine the claim in lines 81-82 to clarify this. > **“This work seems to be an extension of [3] where an additional linearity is added… describe the main differences in the assumptions and/or additional difficulties in the proof?”** You are correct that our work is an extension of [3]. However, the important distinction between our works is that [3] considers the setting where the causal variables are fully observed, while we consider the more difficult setting where the causal variables are latent and can only be viewed through an unknown mixing function, which is where the additional difficulties lie. The proofs in [3] utilize variance properties on the diagonal elements of the Jacobian over the score of the causal variables to derive a topological ordering. While we utilize this result, it is not sufficient given we don't have access to the specified causal variables. In Lemma 1, we prove that we can only estimate this desired Jacobian up to an unknown quadratic parameterized by the matrix $\beta$, where $J_{\hat{Z}} (\hat{z}) = \beta^{T} J_Z(z) \beta$. We therefore must derive additional information about the whole Jacobian, not just the diagonal as in [3]. In Lemma 2, we derive properties of the estimator for the unknown matrix $\beta$ such that $J_{\hat{Z}} (\hat{z})$ has zero variance terms in its diagonal. We then take it a step further in Lemma 3 and Theorem 1, to demonstrate how these properties on $\beta$ allow us to derive layer-wise representations, by maximizing the number of zero-variance terms in the diagonal of the Jacobian. This gives rise to a simple principle in Eq (1) to achieve identifiability, which serves as foundation of our algorithm. > **“Experiments are on synthetic data .. 
an application to real-life would be needed to see if these ideas extend to practice.”** We thank the reviewer for this comment. Similar to [46,18,9] in our manuscript, we view this paper as having a primarily theoretical contribution, in which our experiments on synthetic data provide a convincing proof-of-concept for our main results. We acknowledge the importance of real-world experiments and do believe there are many real-world scenarios, such as topic modeling in natural language for example, that could be interesting settings to evaluate our methods. In particular, the method could be used to identify hierarchical topics at different layers of the underlying causal structure. Such experiments would however necessitate an extensive amount of additional thought in experimental set up, which we believe is out of the scope of our current work, which mainly focuses on theoretical guarantees. > **“Identifiability up to layers seems a bit hard to grasp… what does it mean for the practitioner?”** For causal systems which have an inherent hierarchical structure, upstream layer representations will capture all of the information used in determining each particular level of variables. This can be best explained with an example. Consider we wish to disentangle information about a latent genealogical tree, which we don’t have the ability to intervene on as it is no longer active. Intuitively, each layer representation would contain all of the prior ancestral information used to determine a given generation's traits, where the top layer would only contain information about the original ancestors and the bottom layer representation would contain all hereditary information. The additional layer-wise noise representations would capture each generation’s level of exogenous noise, providing an understanding of how traits could have spawned over time. 
We will add this particular example to the introduction to build intuition for how our work can be useful beyond just the theoretical guarantees. > **“Typo in L26: extend -> extent”** Thank you for identifying this. We will revise accordingly. --- Rebuttal 2: Title: Score update Comment: Thank you for your response. I understand the authors’ clarifications on the differences between the works that I have cited, but my concern is more with the framing of the contributions, and it still seems like overselling the work even if there are clear technical differences from prior works. While there is some intuition behind identifiability up to layers that the authors give, I still find it hard to understand why it would be useful either theoretically or in practice. Thanks for the other technical clarifications as well; I have increased my score. --- Rebuttal Comment 2.1: Title: Response to further points Comment: Thank you for the response and the further comments. We are glad that we are able to make the technical clarifications. Regarding the remaining concerns about the framing of the contributions, we really appreciate the comments and will try to clarify them via the following changes: 1. Causal disentanglement in the purely observational setting: in the related work section on causal disentanglement (line 81-82), we stated that “our work is the first to establish identifiability guarantees in the purely observational setting without imposing any structural assumptions over the mixing function”. We will modify this claim to “our work establishes identifiability guarantees of causal disentanglement in the purely observational setting, without imposing any structural assumptions over the mixing function”. In addition, we will add a discussion paragraph to clarify that identifiability of latent factors in the purely observational setting has been considered outside of causal disentanglement, such as in the work [2] mentioned by the reviewer. 2.
Score-based approaches in causal discovery vs. causal disentanglement: in the related work section on “score matching in causal discovery” (line 91-92), we commented that “extending these ideas to causal disentanglement is difficult, since we do not observe the latent factors and can only estimate the log-likelihood of the observed variables”. We will add the technical differences between our proof and the original proof in the causal discovery setting. In particular, we will add a pointer to section 3.2 (which is where our main lemmas and theorems sit), in which we will incorporate our technical clarification from the earlier response: “The proofs in [3] utilize variance properties on the diagonal elements of the Jacobian over the score of the causal variables to derive a topological ordering. While we utilize this result, it is not sufficient given we don't have access to the specified causal variables. In Lemma 1, we prove that we can only estimate this desired Jacobian up to an unknown quadratic parameterized by the matrix $\beta$, where …” We hope that these changes will be sufficient to clarify our contributions and avoid overselling them. Thank you for the other comments as well. In response, we would like to give our perspective on why we believe identifiability up to layers might still be useful despite its limitations: in the emerging field of causal disentanglement, full identifiability of the latent causal model is not possible without additional assumptions, which is why many works proposed to consider, e.g., structural or sparsity restrictions on the mixing function, or access to single-node interventions. However, as we tried to illustrate in lines 39-44, these assumptions might be limiting and impractical in many settings, which is why we choose to step back and study what can be identified without interventions or structural restrictions.
We chose to study nonlinear additive noise models, as they inherit the nice theoretical properties from the causal discovery setting and allow modeling of non-parametric causal mechanisms. In this case, we attain a full theoretical understanding of what can be learned, by showing partial identifiability up to causal layers, which cannot be improved without additional assumptions. Practically, this means that more upstream variables in a hierarchical causal structure can be disentangled more easily. For example, in the context-style model in [4], our results show that the context variables can be identified up to themselves. --- References: [4] Self-supervised learning with data augmentations provably isolates content from style.
Summary: This paper studies the identifiability issue of causal disentanglement from observational data, within the setting of a nonlinear causal model with additive Gaussian noise and linear mixing. An interesting result is that the causal variables can be identifiable at most up to a layer-wise transformation, based on a recent score-based causal discovery method. A practical algorithm is then proposed. Strengths: - The paper reconsiders the fundamentals of an important problem, which is very meaningful. - The theory, in particular the concept of layer-wise identifiability, is new and motivating. - The proposed method is practical. - The paper is well-written and it is enjoyable to read. Weaknesses: A key motivation of the paper relies on the reasonableness of assuming interventions on latent factors. Then this paper assumes a non-linear function with Gaussian additive noise and linear mixing, but these assumptions cannot be tested from observations, either. In other words, these assumptions are, in my view, alternative assumptions on the same problem, but do not relax the previous assumptions. Besides, in this setting, the dimension of latent factors is assumed to be known, which may be a limitation. Technical Quality: 3 Clarity: 4 Questions for Authors: Overall, the paper studies a very interesting and meaningful problem. Within the considered setting, quite some interesting results are obtained. My questions are: 1. As mentioned in the weakness part, this paper assumes a non-linear function with Gaussian additive noise and linear mixing, but these assumptions cannot be tested from observations, either. In other words, these assumptions are, in my view, alternative assumptions on the same problem, but do not relax the previous assumptions. Besides, in this setting, the dimension of latent factors is assumed to be known, which may be a limitation. Can you clarify this? 2. 
the key part of identifiability theory seems to rely on the recent technique of score-based causal discovery. In your setting, the latent factors themselves are identifiable before the linear mixing. I am wondering if other forms of identifiable SCMs for the latent variables can also be identified? 3. Regarding Proposition 1: while a counterexample is sufficient for this theorem, I'd like to ask if this failure case can be avoided by putting additional assumptions? e.g., what if we put a technical condition that $a_1b_1\sigma_1^2+a_2b_2\sigma_2^2\neq 0$ or that $a_1, b_1, a_2, b_2>0$? 4. typo: line 26: extend -> extent Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging review! We appreciate you recognizing both the importance of our work and the merit of our theoretical findings and algorithmic approach. We would like to address your additional comments and questions below: > **“…this paper assumes a non-linear function with Gaussian additive noise and linear mixing, but these assumptions cannot be tested from observations, either. In other words, these assumptions are, in my view, alternative assumptions on the same problem, but do not relax the previous assumptions.”** We thank the reviewer for this comment, and we take this opportunity to give more intuition on our model assumptions and clarify the difference between them and assumptions on interventions. Although it may seem like a strong assumption to consider a non-linear additive Gaussian noise model as the representation of the underlying causal network, we note that this model type is frequently assumed in causal inference, given that its theoretical properties are well understood and there exist many methods to learn its structure in the fully observable setting [1-4]. Additionally, these models are known to be more flexible than linear additive noise models given their ability to fit non-parametric relationships, making them a commonly assumed model for many real-world causal systems, such as gene regulatory networks [5], without needing verification. For the mixing function, we chose to assume a linear mixing as it is essential to the proofs of our theoretical guarantees. However, our results also hold when the true mixing function can be reduced to linear mixing using existing techniques, such as in the case of polynomials being reduced to linear mappings (cf. [6,8]). It would be interesting in future work to understand to what other settings our results can be extended. 
We recognize that there are different levels of assumptions made on the latent model and the mixing function, but we do not consider access to interventions as alternative assumptions on the same problem. While our assumptions restrict _the model class of the data-generating process_, the assumption of interventions assumes _existence of additional data / environments_, which is intrinsically different. This is also why we separate and highlight the assumption on “_data_” from “_latent model_” and “_structural mixing_” in Table 1 of our paper. However, we want to clarify that we do not consider our work as a strict relaxation of prior works, as the works which assume interventions usually compare multiple environments and might arrive at stronger identifiability results by imposing assumptions on the number / types of interventions. > **“Besides, in this setting, the dimension of latent factors is assumed to be known, which may be a limitation.”** Thank you for this comment. We would like to clarify that we do _not_ assume the dimension of the latent vector is known. Given we consider linear mixing, we are able to solve for the latent dimension $n$ by finding the smallest integer $\hat{n}$ such that there exists a full column rank matrix $\tilde{H} \in \mathbb{R}^{d \times \hat{n}}$ where $\tilde{Z}= \tilde{H}^{\dagger} X$ has an open support in $\mathbb{R}^{\hat{n}}$, similar to Lemma 1 in [6]. We will add this information to Section 2 for clarity. > **“the key part of identifiability theory seems to rely on the recent technique of score-based causal discovery method. In your setting, the latent factors themselves are identifiable before the linear mixing. 
I am wondering if other forms of identifiable SCMs for the latent variables can also be identified?”** In considering other works that utilize score matching to identify SCMs in the fully observable setting, we believe that our theoretical result could be extended to learn the upstream layer representations of nonlinear additive models with generic noise as an extension of [7], by modifying the principle used to achieve identifiability in Eq. (1) of our Lemma 3 to accommodate generic noise. The practical algorithm would need to be adapted accordingly as well. We will add this discussion to the paper. > **“Regarding Proposition 1.: while a counter example is sufficient for this theorem, I'd like to ask if this failure case can be avoided by putting additional assumptions? e.g., …”** We appreciate this question regarding conditions for further identifiability. With additional faithfulness assumptions on the data generating process, we can potentially further identify the latent variables beyond "up to upstream layers". For example, assuming a generalized notion of faithfulness and access to soft interventions, [6] demonstrates that the underlying causal graph can be identified up to transitive closure. The condition mentioned by the reviewer can serve as one such faithfulness assumption for the 2-variable scenario. However, it is currently unclear what minimum faithfulness assumptions are required to achieve stronger identifiability guarantees in general scenarios, which we view as an important question for future work. > **“typo: line 26: extend -> extent”** Thank you for identifying this. We will revise accordingly. 
--- References: [1] Nonlinear causal discovery with additive noise models \ [2] Causal discovery with continuous additive noise models \ [3] Score matching enables causal discovery of nonlinear additive noise models \ [4] CAM: Causal additive models, high-dimensional order search and penalized regression \ [5] Estimation of genetic networks and functional structures between genes by using Bayesian networks and nonparametric regression \ [6] Identifiability guarantees for causal disentanglement from soft interventions \ [7] Causal discovery with score matching on additive models with arbitrary noise \ [8] Interventional causal representation learning --- Rebuttal Comment 1.1: Comment: Thanks for your response. I maintain my score. --- Reply to Comment 1.1.1: Title: Response Comment: Thank you again for the suggestion and the discussion!
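The rank-based recovery of the latent dimension $n$ discussed in the rebuttal above (finding the smallest $\hat{n}$ admitting a full column rank mixing) can be illustrated numerically. Below is a minimal, hedged sketch, not the paper's actual procedure: under exact, noiseless linear mixing $X = HZ$ with $H$ of full column rank and $Z$ having open support, the data matrix itself has rank $n$, so a numerical rank estimate recovers the latent dimension; the function name and toy sizes are ours.

```python
import numpy as np

def estimate_latent_dim(X):
    """Estimate the latent dimension n from observations X (d x N).

    Noiseless illustration: under linear mixing X = H Z with H of full
    column rank and Z with open support, rank(X) = n almost surely.
    """
    return int(np.linalg.matrix_rank(X))

# Toy check: d = 10 observed dims, n = 3 latent dims, N = 500 samples.
rng = np.random.default_rng(0)
n, d, N = 3, 10, 500
Z = rng.standard_normal((n, N))   # latent samples with open support
H = rng.standard_normal((d, n))   # full column rank with probability 1
X = H @ Z                         # linear mixing
print(estimate_latent_dim(X))     # 3
```

With noise, one would instead threshold the singular values of `X`, which is why the cited lemma works with the smallest $\hat{n}$ admitting a consistent full-rank factorization rather than an exact rank.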
Summary: This paper investigates causal disentanglement, learning latent causal factors from observational data without interventions. It identifies latent factors in nonlinear causal models with additive Gaussian noise and linear mixing, showing that causal variables can be identified up to a layer-wise transformation. The authors propose an algorithm based on quadratic programming over score estimation and validate it with simulations, demonstrating meaningful causal representations from observational data. Strengths: 1. The paper presents a method to identify causal variables from observational data without interventions. 2. It also provides theoretical analysis, and demonstrates that latent variables can be identified up to a layer-wise transformation consistent with the underlying causal ordering, with no further disentanglement possible. Weaknesses: 1. The method proposed in this paper relaxes some assumptions, making it potentially more applicable to real-world scenarios. It would be better to provide the performance of applying the proposed method to real-world data. 2. In line 475, should the denominator of the first term after the third equals sign in the equation be differentiated with respect to $z_l$ instead of $k_l$? Technical Quality: 2 Clarity: 2 Questions for Authors: See the weaknesses above. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing the strength of our theoretical analysis and our proposed method! We would like to address your comments below: > **“The method proposed in this paper relaxes some assumptions, making it potentially more applicable to real-world scenarios. It would be better to provide the performance of applying the proposed method to real-world data.”** We thank the reviewer for this suggestion. Similar to [1,2,3], we view this paper as having a primarily theoretical contribution, where our experiments on synthetic data provide a proof-of-concept for our main results. While we acknowledge the importance of real-world experiments, a major challenge lies in the fact that ground truth latent variables are not specifically labeled in many real-world datasets, making it difficult to evaluate the precise accuracy of estimated latent variables. Similar constraints are present in many previous works on causal disentanglement, where evaluations are generally based on synthetic data. However, we do believe there are many real-world scenarios, such as topic modeling in natural language, that could be interesting settings to evaluate our methods. In particular, the method can be used to identify hierarchical topics at different layers of the underlying causal structure. Another example is learning a latent genealogical tree, where each layer representation would contain all of the prior ancestral information used to determine a given generation's traits. Such experiments would, however, necessitate an extensive amount of additional thought in experimental setup, which we believe is beyond the scope of our current work, whose main focus is theoretical guarantees. We nevertheless recognize the importance of real-world applications and will add these examples to the introduction to build intuition for how our work might be useful beyond the theoretical guarantees. 
> **In line 475, the denominator of the first term after the third equals sign in the equation should be differentiated with respect to $z_l$ instead of $k_l$?** Thank you for pointing this out. This term should indeed be differentiated with respect to $z_l$ instead of $k_l$. We will edit accordingly. --- References: [1] General identifiability and achievability for causal representation learning \ [2] Learning latent causal graphs via mixture oracles \ [3] Learning linear causal representations from interventions under general nonlinear mixing --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. I’m happy to increase my evaluation score. --- Reply to Comment 1.1.1: Comment: Thank you for the discussion and for updating the score! We are grateful for the suggestions.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
FedAvP: Augment Local Data via Shared Policy in Federated Learning
Accept (poster)
Summary: This paper proposes FedAvP, a novel algorithm to perform data augmentation via shared policy in federated learning (FL). Extensive experiments verify the effectiveness of the proposed algorithm. Strengths: S1: The proposed algorithm is novel with solid theoretical analysis. S2: The experiments are comprehensive, with open-source code submitted. Weaknesses: W1: The authors can conduct more experiments on more comprehensive non-IID settings. For example, in a previous FL benchmark on non-IID data [1], the quantity-based label skew settings are challenging. The authors are suggested to experiment on such settings to further support the effectiveness of the proposed algorithm. [1] Li, Qinbin, et al. "Federated learning on non-iid data silos: An experimental study." 2022 IEEE 38th international conference on data engineering (ICDE). IEEE, 2022. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see weaknesses. The authors are suggested to conduct more comprehensive experiments. I will adjust the score based on the author response. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please see weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. More experiments on FL benchmark on non-IID data** We conduct experiments on comprehensive non-IID settings suggested by the reviewer, specifically the quantity-based label skew settings described in [1]. The table below presents the results across different datasets and partitioning strategies. Here, C is the number of different labels held by each client. In extreme label skew cases, such as C = 1, where data labels are highly partitioned, our algorithm shows slightly lower performance on CIFAR-100 (C = 1). However, in all other cases, our algorithm demonstrates improved performance. |Niid-bench(dataset/C)[1]|CIFAR100(C=3)|CIFAR100(C=2)|CIFAR100(C=1)|SVHN(C=3)|SVHN(C=2)|SVHN(C=1)| |-|-|-|-|-|-|-| |FedAvg+Default|27.75|24.55|**7.59**|89.5|85.34|8.45| |FedAvg+RandAugment|25.38|22.94|6.69|85.75|79.05|7.64| |FedAvg+TrivialAugment|24.36|19.58|4.84|85.35|77.99|7.64| |FedProx+Default|27.1|24.26|7.51|89.18|85.87|9.39| |FedProx+RandAugment|26.1|24.46|5.66|86.19|80.11|7.63| |FedProx+TrivialAugment|24.15|20.14|3.17|84.73|78.56|8.34| |FedDyn+Default|27.84|24.89|7.39|89.59|86.65|14.56| |FedDyn+RandAugment|25.8|23.34|1.57|83.64|80.06|9.52| |FedDyn+TrivialAugment|24.5|19.7|3.81|84.34|79.06|9.4| |FedFA+Default|27.51|23.23|6.83|89.91|82.94|11.6| |FedFA+RandAugment|21.58|25.13|3.09|87.97|59.52|11.6| |FedFA+TrivialAugment|23.33|20.07|5.58|87.05|68.69|11.87| |Scaffold+Default|29.75|20.09|1.18|90.07|85.02|9.39| |Scaffold+RandAugment|26.56|17.06|1.18|82.57|6.52|14.45| |Scaffold+TrivialAugment|19.21|12.02|1.17|79.17|6.53|7.64| |FedAvP(local)|27.74|24.35|5.38|91.74|88.74|11.6| |FedAvP(Fast Update)|**31.54**|**30.96**|6.17|**92.53**|**90.13**|**18.92**| **Q1.** This question is related to W1. We conducted experiments in non-IID settings as suggested by the reviewer. We kindly ask you to refer to our response to W1. Additionally, please refer to our responses to Reviewer zQTG W1 and Reviewer WeMd W2, which include comparisons with more non-IID settings. 
[1] Li, Qinbin, et al. "Federated learning on non-iid data silos: An experimental study." 2022 IEEE 38th international conference on data engineering (ICDE). IEEE, 2022. --- Rebuttal Comment 1.1: Title: Response to authors Comment: I appreciate the authors' efforts on additional experiments. Given the rebuttal and other reviews, I decide to raise the rating.
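The quantity-based label-skew setting used in the added experiments above (each client holds data from only C label classes, following the NIID-Bench study [1]) can be sketched as follows. This is a simplified illustration with function names of our own choosing; the benchmark's actual partitioner differs in details such as how classes are balanced across clients.

```python
import random
from collections import defaultdict

def quantity_label_skew(labels, num_clients, C, seed=0):
    """Assign each client C label classes at random, then split the samples
    of each class evenly among the clients holding that class
    (simplified sketch of quantity-based label skew)."""
    rng = random.Random(seed)
    classes = sorted(set(labels))
    client_classes = [rng.sample(classes, C) for _ in range(num_clients)]
    # group sample indices by class
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    client_idx = [[] for _ in range(num_clients)]
    for c in classes:
        holders = [k for k in range(num_clients) if c in client_classes[k]]
        if not holders:
            continue  # class drawn by no client in this sketch
        for j, idx in enumerate(by_class[c]):
            client_idx[holders[j % len(holders)]].append(idx)
    return client_idx

# Toy example: 10-class labels, 5 clients, C = 2 classes per client.
labels = [i % 10 for i in range(200)]
parts = quantity_label_skew(labels, num_clients=5, C=2)
assert all(len({labels[i] for i in p}) <= 2 for p in parts)
```

Setting C = 1 reproduces the extreme skew in the table above, where every client sees a single label and aggregation becomes hardest.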
Summary: This paper points out that sharing input-level and feature-level information poses potential privacy leakage. FedAvP only shares the augmentation policy, which is not related to the data. This method leverages first-order information as a replacement to reduce privacy leakage and communication costs. Comprehensive experiments demonstrate the efficacy and efficiency of FedAvP. Strengths: - Empirical evidence shows that the proposed method is effective in enhancing model performance. - This method avoids the privacy leakage of the information-sharing strategy. - This method can be deployed on different methods. Weaknesses: - The writing of this paper should be improved to express ideas clearly. - A deeper analysis is needed to reveal why different $\alpha$ learn different augmentation methods under the same data as depicted in Figure 1. There is also no in-depth insight into how different augmentations influence performance in different FL settings. Technical Quality: 2 Clarity: 2 Questions for Authors: - As FedAvP also learns an augmentation policy, what is the performance of FedAvg+AutoAugment, which is also an automated policy? - The meaning of OOD client is not mentioned. - This paper claims that the reported accuracy is calculated as the weighted average of each client’s accuracy by the number of data points they have. However, most of the methods report the accuracy of the global model. How is the test dataset partitioned for each client? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: See weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. Improvement to the explanation** Thank you for your valuable suggestion to improve our paper. In response, we will provide more detailed explanations to address your points, including the effect of different heterogeneity levels $\alpha$, an additional baseline, the meaning of OOD (out-of-distribution), and the weighted average metric. **W2. Why does different alpha learn different augmentations?** The alpha controls the degree of data heterogeneity among clients. As the value of alpha decreases, the degree of heterogeneity increases. As the heterogeneity increases, the distribution of labels among the clients becomes more uneven, resulting in the model at each client being trained on vastly different distributions even when the same dataset is used. This explains why the optimal augmentations learned by our algorithm vary with changes in alpha. We will include figures of the dataset distributions in the appendix to further aid readers' understanding. **Q1. The performance of FedAvg + AutoAugment** The AutoAugment paper proposes a method where multiple child networks based on a controller RNN are trained through reinforcement learning. This approach was not developed with FL in mind, where data privacy is essential. Therefore, applying AutoAugment directly to FL is challenging without further development. The original AutoAugment also requires significant computational resources, as the paper reports 5000 GPU hours for CIFAR-10 and 100 GPU hours for SVHN, making it impractical for FL environments in terms of computation time and communication costs. To facilitate comparison, we applied the “pre-trained” augmentation policies reported in AutoAugment to the FedAvg algorithm. We used the CIFAR-10 and SVHN AutoAugment policies from the paper and compared the results. For CIFAR-100, we used the augmentation policy trained on CIFAR-10, as done in AutoAugment. The “Default” augmentation refers to using Random Crop and Horizontal Flip. 
As shown in the following table, our algorithm FedAvP shows superior performance across various heterogeneity settings for CIFAR-100, CIFAR-10, and SVHN. | Main Table | CIFAR100/5.0 | CIFAR100/0.1 | CIFAR10/5.0 | CIFAR10/0.1 | SVHN/5.0 | SVHN/0.1 | |--------------------------|--------------|--------------|-------------|-------------|----------|----------| | FedAvg + Default | 40.06 | 37.34 | 79.76 | 72.6 | 92.78 | 85.58 | | FedAvg + AutoAugment | 47.74 | 43.45 | 81.92 | 72.12 | 92.65 | 87.02 | | FedAvP (Fast Update) | 49.97 | 45.08 | 83.55 | **77.2** | **95.14**| 87.86 | | FedAvP | **50.47** | **45.96** | **83.78** | 77.1 | 95.02 | **89.81**| **Q2. The meaning of OOD client** OOD stands for out-of-distribution; an OOD client is one that does not participate in training but is used to evaluate the global model’s performance at test time. Evaluating OOD clients allows us to determine how well the FL model generalizes to new clients. Even though we briefly mentioned the OOD client in Section 4.1, the actual comparison and experimental results are exhibited in Appendix A; we will clarify this. As shown in Table 5, our approach also performs best in the experiment with the OOD clients, confirming its superior generalization capacity. **Q3. The weighted average of each client’s accuracy** We followed the weighted accuracy metric from the pFL-bench paper [1]. Based on each client's data size, the weighted average metric ensures fair representation and robustness to data imbalances, providing a more accurate overall performance evaluation. It reflects the real-world impact of larger datasets and mitigates the noise from smaller datasets, leading to a more stable assessment in FL. The label distribution of each client's test dataset is the same as the label distribution of their own training dataset. 
For example, in the case of the CIFAR-10 dataset, if the number of data samples per label in Client A's training dataset is [100, 30, 50, 70, 10, 80, 20, 90, 40, 60], then the number of data samples per label in Client A's test dataset will be [20, 6, 10, 14, 2, 16, 4, 18, 8, 12]. Additionally, the distribution of the number of test data samples among clients is the same as the distribution of the number of training data samples among clients. Therefore, the weighted average of the global model's accuracy on each client's test dataset by the number of data samples they have is equal to the global model's accuracy on the entire test dataset. [1] Chen, Daoyuan, et al. "PFL-bench: A comprehensive benchmark for personalized federated learning." NeurIPS, 2022. [2] Pascanu, Razvan, et al. "On the difficulty of training recurrent neural networks." ICML, 2013. [3] Mikolov, Tomas, et al. "Empirical evaluation and combination of advanced language modeling techniques." Interspeech, 2011. [4] Shu, Jun, et al. "Meta-weight-net: Learning an explicit mapping for sample weighting." NeurIPS, 2019. [5] Zhou, Fengwei, et al. "Metaaugment: Sample-aware data augmentation policy learning." AAAI, 2021. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their efforts. However, I am still concerned about the insight of this paper, e.g., why does FedAvP tend to learn one strategy and not another when heterogeneity is severe? This may provide new insight into addressing heterogeneity in FL. At the same time, a lot of work tests global model accuracy, and that metric is also needed. I hope this paper can inspire a new view in FL to solve the heterogeneity problem. In summary, I will maintain my grade. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, we sincerely appreciate your thoughtful advice. However, FedAvP does not learn just one strategy. 
As shown in Figure 2, it clearly learns different data augmentation strategies depending on the heterogeneity levels and distributions. As described in Section 4.3, the global policies are then adapted into a local policy using each client's local data. Figure 3 also demonstrates the statistics of personalized policies among different clients. As you suggested, we also evaluated the accuracy of the global model on CIFAR-100 ($\alpha = 5.0$ and $\alpha = 0.1$) using an equally-weighted metric. We will include the results in the appendix. In this experiment, FedAvP still outperforms the other baselines. |Dataset/heterogeneity degree α|CIFAR100/5.0|CIFAR100/0.1| |------------------------------|------------|------------| |FedAvg+Default|40.04|36.98| |FedAvg+RandAugment|47.3|43.17| |FedAvg+TrivialAugment|46.61|42.04| |FedProx+Default|40.56|37.61| |FedProx+RandAugment|45.95|41.25| |FedProx+TrivialAugment|46.59|41.67| |FedDyn+Default|42.11|38.23| |FedDyn+RandAugment|45.68|42.08| |FedDyn+TrivialAugment|46.84|40.92| |FedExp+Default|42.78|38.22| |FedExp+RandAugment|46.14|41.97| |FedExp+TrivialAugment|48.54|42.01| |FedGen+Default|42.12|38.05| |FedGen+RandAugment|47.11|42.96| |FedGen+TrivialAugment|47.73|40.62| |FedMix+Default|39.59|38.46| |FedMix+RandAugment|46.67|42.7| |FedMix+TrivialAugment|46.62|42.49| |FedFA+Default|43.68|41.18| |FedFA+RandAugment|48.87|43.26| |FedFA+TrivialAugment|47.86|43.36| |FedAvP(local)|49.05|43.64| |FedAvP(Fast Update)|49.94|45.09| |FedAvP|**50.59**|**45.93**| We respectfully hope that the reviewer re-examines the evaluation of our work. Our approach of sharing a data augmentation policy is a novel direction in federated learning research. We have addressed data scarcity and heterogeneity while preserving security, and also provided a theoretical analysis that elucidates the role of meta-policy updates in distributed learning.
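The two accuracy metrics discussed in this thread can be stated compactly: the weighted average scales each client's accuracy by its data size (the pFL-bench metric), while the equally-weighted average treats all clients alike. A minimal sketch with hypothetical numbers, not values from the paper:

```python
def weighted_accuracy(accs, sizes):
    """Average of per-client accuracies weighted by client data size,
    in the style of the pFL-bench metric."""
    total = sum(sizes)
    return sum(a * n for a, n in zip(accs, sizes)) / total

def equal_accuracy(accs):
    """Unweighted (equally-weighted) average over clients."""
    return sum(accs) / len(accs)

# Hypothetical example: one large, accurate client pulls the weighted
# metric above the equal-weight one.
accs = [0.90, 0.60, 0.70]
sizes = [800, 100, 100]
print(round(weighted_accuracy(accs, sizes), 3))  # 0.85
print(round(equal_accuracy(accs), 3))            # 0.733
```

When each client's test set is an exact scaled-down copy of its training distribution, as described in the Q3 response, the weighted average coincides with the global model's accuracy on the pooled test set.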
Summary: The paper introduces a novel federated data augmentation algorithm, FedAvP, designed for client-side data augmentation without the need to share client data. Specifically, the authors propose a meta-learning method that allows multiple clients to collaboratively learn data augmentation policies and design a Federated Meta-Policy Loss (FMPL) for the optimization of data augmentation policies. Moreover, to prevent the gradient updates of the data augmentation policies from leaking local data, they propose using first-order approximation optimization to protect privacy and reduce communication costs. Finally, the authors conducted experiments in different data heterogeneity scenarios across multiple datasets to validate the effectiveness of the method. Strengths: 1) The paper innovatively applies the method of automatic data augmentation using policy loss to the federated learning scenario, to mitigate the data heterogeneity issues faced by federated learning. 2) The paper provides theoretical proof for the proposed method, and validates its effectiveness through performance experiments, privacy protection experiments, and cost experiments. 3) The paper is logically coherent and well-organized. 4) The method of multiple clients collaboratively learning automatic data augmentation policies may provide new research ideas for dealing with data heterogeneity issues in federated learning. Weaknesses: 1) The augmentation policy search method in the paper only supports image data augmentation and includes only two types of augmentation methods. Moreover, as the number of augmentation methods increases, the search space will grow exponentially. The authors should analyze the scalability of FedAvP in this regard. 2) The paper lacks comparisons with the latest federated data augmentation methods, and there are few non-iid scenarios involved in the performance experiments, with only one scenario on some datasets. 
Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why were the performance experiments on the Cifar-10, SVHN, and FEMNIST datasets only conducted in one scenario? 2. The gradient calculation of the policy loss introduces significant additional overhead. Could this be a hindrance to applying this method to larger visual pre-training models like ViT? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The method proposed in the paper seems to be applicable only for federated training of small-scale visual models. Its performance on tasks involving text, speech, or multi-modalities and larger-scale models still needs to be further validated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. The scalability of FedAvP** When using two operations, the neural network output $P_{\theta}$ in the paper has a 17x17 dimension, represented as $P_{\theta}(1), ..., P_{\theta}(17 \times 17)$. From this output, we sample $P_{\theta}^{1}, ..., P_{\theta}^{B}$ based on the batch size $B$ and apply the augmentation. Since the augmentation sampling is performed from the output $P_{\theta}$ obtained through a single forward pass, even if the search space expands to 17x17x17 for three operations, the computational load of FedAvP does not increase exponentially due to only requiring one forward pass of the neural network. To assess whether our algorithm can effectively search within this expanded search space, we conducted the following experiment. We extend the layers in FedAvP to experiment with 17x17x17 possible combinations of three operations on the SVHN dataset under different heterogeneity conditions, with $\alpha = 5.0$ and $\alpha = 0.1$. The results, recorded as test accuracy per training round, are presented below. |SVHN / α=0.1|Test(%) at Round 100|Test(%) at Round 300|Test(%) at Round 500| |-|-|-|-| |FedAvP (Fast Update) / 2 layers|86.85|87.84|87.86| |FedAvP (Fast Update) / 3 layers|84.04|89.72|92.07| |SVHN / α = 5.0|Test(%) at Round 100|Test(%) at Round 300|Test(%) at Round 500| |-|-|-|-| |FedAvP (Fast Update) / 2 layers|92.76|94.67|95.14| |FedAvP (Fast Update) / 3 layers|92.73|94.44|95.01| Table: Test accuracies with different search spaces on the SVHN dataset. As shown in the table, despite the 17-fold increase in the search space when using three layers on the SVHN/0.1 dataset, FedAvP’s performance significantly improved compared to the two-layer setup reported in the main table of the paper. This demonstrates that FedAvP can effectively search the expanded search space. On the SVHN/5.0 dataset, the performance remained consistent with that of the two-layer setup. 
We observed that even with a 17-fold increase in the search space due to the use of three layers, FedAvP effectively searches for policies that either improve performance or maintain consistent results, depending on the data distribution. **W2. More experimental results of non-IID scenarios** We conducted additional experiments on CIFAR-10/0.1 and SVHN/5.0, which were not included in the previous paper version. The alpha in FEMNIST is not further explored due to the absence of an adjustable heterogeneity parameter. The results are provided in the official comment. **Q1.** This question is related to W2. We conducted additional experiments in non-IID environments. For a detailed explanation, we kindly ask you to refer to our response to W2. Additionally, please refer to our response to Reviewer 866Z W1, which addresses quantity-based label skew settings[7], where labels are highly partitioned. **Q2. The application of our methods to larger visual model like ViT** We reported the performance of a larger model, specifically the VGG11-s model with approximately 3x more parameters in Appendix A.2. To further investigate, we conduct experiments using the ViT-T model[1,2] on the CIFAR100/5.0 and CIFAR100/0.1, and the results are presented below. |ViT-T model|CIFAR100/5.0|CIFAR100/0.1 | |-|-|-| | FedAvg + Default |31.75|30.71| | FedAvg + RandAugment |42.39|41.7 | | FedAvg + TrivialAugment|41.58|33.75| | FedExp + Default |37.33|35.77| | FedExp + RandAugment |46.36|45.5 | | FedExp + TrivialAugment|44.37|40.08| | FedAvP (Fast Update) |**51.1**|**47.85**| In the table, "Default" refers to the default augmentation, which applies Random Crop and Horizontal Flip. Both the FedAvg and FedExp algorithms were tested with default augmentation, as well as with RandAugment and TrivialAugment. FedAvP (Fast Update) was trained using the Fast Update method described in Section 3.3 of the paper. 
Our algorithm demonstrates improved performance over the CNN-based models reported in the original paper. For the baseline algorithms, we observe overfitting due to the increased number of model parameters. We also compare the computation time on the client side needed to reach a target accuracy, following the methodology reported in the paper. The results are as follows.

|Computation Time of the ViT-T on CIFAR-100/5.0|Rounds (30%)|Time (30%)|
|-|-|-|
|FedAvg + Default|400|3.08 hours|
|FedAvg + RandAugment|300|2.18 hours|
|FedAvg + TrivialAugment|400|2.63 hours|
|FedAvP (Fast Update)|220|3.23 hours|

|Computation Time of the ViT-T on CIFAR-100/0.1|Rounds (25%)|Time (25%)|
|-|-|-|
|FedAvg + Default|300|4.33 hours|
|FedAvg + RandAugment|400|6.02 hours|
|FedAvg + TrivialAugment|480|8.27 hours|
|FedAvP (Fast Update)|260|3.73 hours|

In the first table (CIFAR-100/5.0), we recorded the number of rounds and the computation time required for the global model to reach a target accuracy of 30% when trained with the ViT-T model. In the CIFAR-100/0.1 table, we recorded the rounds and computation time needed to reach a target accuracy of 25%. We compared FedAvP (Fast Update) to FedAvg with each of the Default Augmentation, RandAugment, and TrivialAugment applied, as explained above. Compared to FedAvg, FedAvP (Fast Update) takes slightly longer on the CIFAR-100/5.0 dataset. However, on the CIFAR-100/0.1 dataset, FedAvP (Fast Update) actually achieves faster learning with the ViT-T model. Overall, applying our algorithm to the ViT-T model does not result in a significant increase in computation time.

[1] Dosovitskiy, Alexey, et al. "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale." ICLR, 2021.
[2] Qu, Liangqiong, et al. "Rethinking architecture design for tackling data heterogeneity in federated learning." CVPR, 2022.

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed responses. Most of my concerns have been well addressed.
Summary: This paper proposes FedAvP, which performs data augmentation search by sharing policies among clients in a federated learning (FL) environment. The authors introduce a federated meta-policy loss and utilize first-order gradient information to further enhance privacy and reduce communication costs. The proposed algorithm allows each client to rapidly adapt a personalized policy, alleviating the challenge of non-IID data.

Strengths:
1. This paper proposes a novel federated data augmentation algorithm that shares only the augmentation policies during training. It allows augmenting the data without compromising data privacy.
2. Although FedAvP introduces some additional computation and communication costs, the authors introduce various techniques to mitigate them, such as first-order approximation and fast update.
3. The experiments are well done, and experimental data are provided for all relevant judging criteria. My concern is that the number of datasets is somewhat limited.
4. This paper is logically clear and describes the possible problems and solutions one by one, which makes it easy to understand.

Weaknesses:
1. If the focus of this paper is on addressing non-IID data using data augmentation, then the baselines used for comparison are lacking. Classic methods for addressing non-IID data, such as FedNova and Scaffold, have not been compared.
2. Additionally, to address the non-IID problem, it would be helpful to have some more vivid and intuitive explanations of why the shared policy can alleviate the non-IID issue.
3. One weakness affecting readability is that when the article references inspiring works to address certain issues, it opts for direct citations without more detailed explanations, e.g., gradient clipping and the reweighting in Eq. (3).

Technical Quality: 3 Clarity: 2

Questions for Authors:
1. Regarding W2, could you please add more intuitive explanations of the shared policy strategy?
2.
Regarding W3, could you please explain the advantage of the reweighting in Eq. (3)? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors provide a discussion of the limitations of the work in Section 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. Comparison with other classic non-IID methods, such as FedNova and Scaffold.**

In Table 1 of the manuscript, we compare our model with baselines, including state-of-the-art federated learning and federated data augmentation algorithms. As the reviewer suggested, we conducted an additional experiment with non-IID algorithms, including FedNova [1] and Scaffold [2].

|Dataset / heterogeneity degree α|CIFAR100/0.1|CIFAR10/0.1|SVHN/0.1|FEMNIST|
|-|-|-|-|-|
|FedNova + Default|38.52|74.45|88.16|81.21|
|FedNova + RandAugment|42.43|74.08|84.42|79.79|
|FedNova + TrivialAugment|40.23|71.99|82.96|78.92|
|Scaffold + Default|44.94|75.67|87.26|83.17|
|Scaffold + RandAugment|43.57|72.4|77.07|79.31|
|Scaffold + TrivialAugment|42.14|64.12|14.70|78.06|
|FedAvP (Fast Update)|45.08|**77.2**|87.86|**84.47**|
|FedAvP|**45.96**|77.1|**89.81**|84.27|

"Default" refers to the default augmentation, which applies Random Crop and Horizontal Flip. Both the FedNova and Scaffold algorithms were tested with default augmentation, as well as with RandAugment and TrivialAugment. FedAvP (Fast Update) was trained using the Fast Update method described in Section 3.3 of the paper. We examined the performance under diverse heterogeneity conditions in highly non-IID environments using the CIFAR100, CIFAR10, SVHN, and FEMNIST datasets. Our FedAvP method outperformed the baselines across all tested scenarios.

**W2. Why can the shared policy alleviate the non-IID issue?**

When the clients' data distributions are non-i.i.d. in federated learning, local models are trained on different data distributions, which can lead to poor global model aggregation due to distribution shifts. However, our shared data-augmentation policy and meta-policy search strategy can generate additional data samples that help balance the data across clients, thereby reducing the disparity and scarcity in data distributions and improving model convergence and accuracy.
As shown in Figure 1(a), under the 'local policy update,' our algorithm updates the augmentation policy so that the aggregated model performs well across the various local data distributions (non-IID, pre-augmentation).

**W3. Detailed explanations of gradient clipping and reweighting**

We sincerely thank the reviewer for the suggestion regarding the improvement of the readability of our paper. Gradient clipping [3,4] rescales the gradient via $g \leftarrow \frac{c}{\lVert g \rVert} g$ whenever $\lVert g \rVert > c$, and it is applied to the gradients of $D_{k}^{val}$ and $D_{k,n-1}^{train}$ in Eq. (6). Reweighting [5,6] is performed using the sigmoid output of the neural network $P_{\theta}$ in Eq. (3). Specifically, when the search space is the $17 \times 17$-dimensional neural network output $P_{\theta}$, each dimension $P_{\theta}(1), \ldots, P_{\theta}(17 \times 17)$ corresponds to a two-operation augmentation, where a higher value indicates a higher probability of selecting that augmentation. During training, with a batch size of $B$, $B$ samples are drawn from a multinomial distribution based on the values of $P_{\theta}$, resulting in $P_{\theta}^1, \ldots, P_{\theta}^B$. Here, each $P_{\theta}^i$ corresponds to one of the $17 \times 17$ neural network output dimensions. For each data sample, the two-operation augmentation is applied based on $P_{\theta}^i$, and the weight of the augmented data sample is determined by the corresponding $P_{\theta}^i$. We will include this explanation in the Appendix.

**Q1.** Please refer to our response to W2.

**Q2. The advantage of the reweighting in Eq. (3).**

We applied the corresponding two-operation augmentations to each data sample in the batch using the $17 \times 17$-dimensional neural network output, where the weight of the augmented sample was determined by $P_{\theta}^i$. After training the local model, our policy loss tries to improve the performance on the validation data.
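The two mechanisms described above — drawing augmentation indices from a multinomial distribution over the policy output, and norm-based gradient clipping — can be sketched in a few lines of NumPy. This is our own illustrative sketch, not the authors' implementation; the function names, shapes, and random policy values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_augmentations(p_theta, batch_size, rng):
    """Draw one augmentation index per batch element from a multinomial
    distribution proportional to the policy weights p_theta."""
    probs = p_theta / p_theta.sum()
    return rng.choice(len(p_theta), size=batch_size, p=probs)

def clip_gradient(g, c):
    """Rescale g to norm c whenever ||g|| > c; otherwise leave it unchanged."""
    norm = np.linalg.norm(g)
    return g * (c / norm) if norm > c else g

# Hypothetical 17x17 sigmoid-like outputs, one weight per two-operation augmentation
p_theta = rng.uniform(0.1, 0.9, size=17 * 17)
idx = sample_augmentations(p_theta, batch_size=8, rng=rng)
weights = p_theta[idx]  # per-sample weights used to reweight the loss

g = np.array([3.0, 4.0])  # ||g|| = 5
g_clipped = clip_gradient(g, c=1.0)  # rescaled to unit norm
```

Note that the per-sample cost of sampling does not depend on the size of the policy output, which matches the single-forward-pass argument in W1.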
In this process, two-operation augmentations that hinder the performance on the validation data are updated through backpropagation using the weight $P_{\theta}^i$, causing the weight $P_{\theta}^i$ to decrease. Our Federated Meta-policy (FMPL) approach updates a shared data augmentation policy for each client’s unique environment to mitigate data heterogeneity. In the typical data-augmentation approach, the sampling operation is non-differentiable. However, the reweighting approach in Eq. (3) enables differentiation, which is a key advantage.

[1] Wang, Jianyu, et al. “Tackling the objective inconsistency problem in heterogeneous federated optimization.” NeurIPS, 2020.
[2] Karimireddy, Sai Praneeth, et al. “Scaffold: Stochastic controlled averaging for federated learning.” ICML, 2020.
[3] Pascanu, Razvan, et al. “On the difficulty of training recurrent neural networks.” ICML, 2013.
[4] Mikolov, Tomas, et al. “Empirical evaluation and combination of advanced language modeling techniques.” Interspeech, 2011.
[5] Shu, Jun, et al. “Meta-weight-net: Learning an explicit mapping for sample weighting.” NeurIPS, 2019.
[6] Zhou, Fengwei, et al. “Metaaugment: Sample-aware data augmentation policy learning.” AAAI, 2021.
[7] Li, Qinbin, et al. “Federated learning on non-iid data silos: An experimental study.” ICDE. IEEE, 2022.

---

Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for the responses. Most of my concerns have been addressed.
Rebuttal 1: Rebuttal: **More experimental results**

We conducted additional experiments to answer Reviewer WeMd's W2 and Reviewer 866Z's W1, considering more non-IID settings and highly partitioned label skew settings [1], respectively.

|Dataset/heterogeneity degree α|CIFAR100/5.0|CIFAR100/0.1|CIFAR10/5.0|CIFAR10/0.1|SVHN/5.0|SVHN/0.1|FEMNIST|
|-|-|-|-|-|-|-|-|
|FedAvg+Default|40.06|37.34|79.76|72.6|92.78|85.58|80.65|
|FedAvg+RandAugment|47.29|43.6|82.82|73.73|92.48|84.84|79.4|
|FedAvg+TrivialAugment|46.61|42.16|82|71.09|91.99|83.36|79.01|
|FedProx+Default|40.57|37.71|80.64|73.23|93.15|86.79|81.45|
|FedProx+RandAugment|45.97|41.39|82.56|73.71|92.33|85.52|77.11|
|FedProx+TrivialAugment|46.61|41.81|81.83|70.89|91.67|84.11|79.67|
|FedDyn+Default|42.09|38.52|80.36|73.86|93.16|87.6|80.47|
|FedDyn+RandAugment|45.7|42.24|82.51|72.78|92.16|81.47|77.64|
|FedDyn+TrivialAugment|46.83|41.1|82.03|70.34|92.22|83.41|79.31|
|FedExp+Default|42.76|38.28|80.64|73.7|92.77|86.66|81.45|
|FedExp+RandAugment|46.13|42.23|82.86|70.78|92.12|84.63|79.69|
|FedExp+TrivialAugment|48.55|42.09|82.51|71.07|92.64|83.72|80.2|
|FedGen+Default|42.14|38.27|80.23|72.74|92.71|86.79|81.86|
|FedGen+RandAugment|47.11|43.1|81.9|73.42|91.84|84.39|79.34|
|FedGen+TrivialAugment|47.71|40.76|82.58|70.87|91.73|83.23|77.35|
|FedMix+Default|40.26|38.69|80.99|74.54|92.8|86.02|81.63|
|FedMix+RandAugment|46.69|43|83.08|74.25|92.36|83.44|79.46|
|FedMix+TrivialAugment|46.64|42.63|81.83|71.5|91.85|82.34|77.84|
|FedFA+Default|43.7|41.21|82.61|76.02|92.77|87.33|81.13|
|FedFA+RandAugment|48.86|43.44|82.44|73.53|91.21|81.32|78.71|
|FedFA+TrivialAugment|47.86|43.45|80.12|72.89|91.89|78.62|78.96|
|FedAvP(local)|49.04|43.86|83.64|73.43|94.71|87.05|83.94|
|FedAvP(Fast Update)|49.97|45.08|83.55|**77.2**|**95.14**|87.86|**84.47**|
|FedAvP|**50.47**|**45.96**|**83.78**|77.1|95.02|**89.81**|84.27|

Our algorithm achieves the highest performance
across all scenarios. [1] Li, Qinbin, et al. “Federated learning on non-iid data silos: An experimental study.” ICDE. IEEE, 2022.
NeurIPS_2024_submissions_huggingface
2024
Are High-Degree Representations Really Unnecessary in Equivariant Graph Neural Networks?
Accept (poster)
Summary: The present work pertains to analysing and improving Graph Neural Networks operating on geometric graphs that possess E(3) (Euclidean) symmetries, i.e. the tasks of interest are invariant/equivariant to rotations, translations and reflections. The paper challenges the assumption that it is sufficient to use E(3)-equivariant features of 1st degree (the term “degree” is related to the dimensionality of the group representation to which the features are equivariant - 1st-degree features are 3-dimensional and are equivariant to 3D rotations/reflections), as done in EGNN, Satorras et al., ICML’21, which works quite well in practice and is computationally efficient. The authors construct a series of counterexample symmetric geometric graphs (i.e. graphs which, when rotated by certain angles, remain intact up to permutations of their vertices) which are shown to be challenging for geometric GNNs. It is shown that, for certain degrees $l$, a GNN can only produce output features equal to zero, therefore showing that the degree is crucial for expressivity and that $l=1$ is insufficient for some cases. To address this limitation, they combine ideas from EGNN and TFN, Thomas et al., arxiv’18: They create a GNN, similar to EGNN, enriched with auxiliary features of higher degree, using spherical harmonics (similar to TFN). To keep the computational complexity low, they produce the messages used for the updates of node features/positions/auxiliary features via scalarisation using inner products, simplifying the messages used in TFN. The expressivity of the method is theoretically analysed, showing the ability to recover angles, and empirically tested, showing improved performance in several synthetic and real-world benchmarks. Strengths: **Significance**. The studied problem is important for various physical sciences using data-driven methods and finding better trade-offs between performance and computational complexity is key to their progress. 
The present paper improves this trade-off both theoretically and empirically. **Novelty**. - *Theoretical results*. The underlying theory illustrating the expressivity barriers of low-degree representations (theorems 3.4-3.6) is simple and concise, yet insightful and of independent technical interest (it can be used for other types of symmetric objects to find out properties of neural representations and illustrate limitations of modelling choices). Additionally, Theorem 4.1 backs the proposed method with theoretical arguments, hinting that it is probably a good trade-off between expressivity and computational complexity (although the former is not completely characterised). - *Method*. As far as I know, the proposed approach to incorporate higher-degree representations into message-passing is original. Additionally, it is easy to understand and implement, which may facilitate reproducibility. **Presentation**. Apart from a few exceptions (see weaknesses), the paper is well-explained, with carefully chosen notation and progressive illustration of the different results (from the theoretical motivation to the model improvement and empirical evaluation). **Empirical results & Execution**. The experimental outcomes are in agreement with the theory on synthetic data, while the method is shown to generalise better than its competitors on real-world data, with additional computational efficiency advantages. Overall, this work is well-executed: starting from a characterisation of the limitations of existing works, then proposing a remedy (that gets the best of both worlds from two different families of methods) and finally testing it both on the counterexamples, as well as in real-world scenarios, validating the theory. Weaknesses: **Clarity: limits audience and creates some issues with contextualization to related work**. 
Although not a major weakness, there are some issues with the clarity of the presentation, which I think should be addressed to improve the exposition of the paper to a wider audience. - In the introduction and the related work section, the authors mention several concepts without providing explanations, which might make the text hard to follow, especially for readers who are not experts in geometric GNNs and Euclidean symmetries. - For instance: *x-degree steerable representations* - this is a notion at the heart of the paper, so it should probably be intuitively explained early on, e.g. by summarising the formal definitions at the end of section 3.1. - *Clebsch-Gordan tensor product* - this concept is not explained in the paper. However, I do not think that the authors should assume familiarity with it. The reader needs to obtain a clear understanding due to the following: the main methodological innovation of the present work is to replace this operation with inner products to improve computational efficiency. I think that the authors should explain in a separate subsection the differences between these two operators and discuss what they sacrifice by not incorporating CG products in their architecture. - Examples 3.2 and 3.3 need further elaboration as they are the main motivation behind the proposed methodology. E.g., why are even folds not symmetric w.r.t. the cyclic group? Perhaps explain what the dihedral group is. Why are regular polygons symmetric w.r.t. the dihedral group? Technical Quality: 4 Clarity: 3 Questions for Authors: - Section 3.1/Section 4: It is unclear to me why the authors introduced the modulation of the spherical harmonics (re-scaling). How does this affect expressivity? - Section 3.1: Is the representation of O(3) – Eq. (2) – conventional or introduced here for the first time? In the first case, could the authors provide a reference? 
In the second case, it would be more appropriate to precisely show that this is indeed a representation for completeness. - As far as I understand, Eq. (3) refers to graph-level features. Perhaps this should be explicitly mentioned to improve readability. - What information is lost from the scalarization trick of Eq. (6) compared to CG tensor products? To what extent does this limit expressivity? - How do the authors explain that in certain cases in Table 2, HEGNN does not achieve 100% performance (although it should according to the theory)? Additionally, why is the std so high (15% and 30%)? - In table 3, it seems that HEGNN with $l\leq 1$ always outperforms EGNN, which is also 1st-degree. How do the authors explain this? - It is unclear to me how equivariance to reflections is handled by this method. How does the discussion at the end of section 3.1 affect the operators used? - Table 5 is quite helpful, perhaps consider moving it to the main text. **Minor**: I have the impression that there were a few typos throughout the text. I suggest performing a proof-reading (examples: L78 typo: university --> universality, L266 typo: donated --> denoted). Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: I recommend that the authors devote a separate section to discuss the limitations of their work. Currently, I could locate only a limited discussion in the appendix with regards to not testing in large-scale systems. I do not foresee any negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments! We provide the following responses to your concerns: > **W1: More detailed background knowledge and conceptual explanations are needed to improve clarity.** Thank you for your valuable suggestions! We will further summarize the formal definitions of steerable representations at the end of Section 3.1, introduce the concept of the CG tensor product in Section 3.1, and compare its difference with Eq. (6) in Section 4. Further elaboration will be added to Examples 3.2 and 3.3. We sincerely thank you once again, and will further enhance the clarity of our paper by providing more explanations related to geometric GNNs, Euclidean symmetries, and Examples 3.2 and 3.3. > **Q1: The significance of re-scaling on spherical harmonics.** Thanks for your comments! The re-scaling part corresponds to the radial function, which is widely used in previous papers such as TFN. Here, for convenience, we combine the radial function into the formulation of the spherical harmonics to improve the readability of our paper. > **Q2: Origin of the O(3) group representation.** Nice suggestion! This representation method is conventional and can be found in much physics-related literature, such as [a,b]. More general and theoretical content on representation theory can be found in pure mathematics books, such as [c,d]. We will add the necessary citations around Eq. (2) in the revised version. [a] Landau, L. D., and E. M. Lifshitz. Quantum Mechanics: Non-Relativistic Theory. [b] Chen, J. Q., Jialun Ping, and Fan Wang. Group Representation Theory for Physicists. [c] Weyl, Hermann. The Classical Groups: Their Invariants and Representations. [d] Fulton, William, and Joe Harris. Representation Theory. > **Q3: It is helpful to explicitly point out that Eq. (3) refers to graph-level features.** Thank you for raising this point, which is very important for improving the readability of our paper. We will clearly mention this in subsequent revisions. 
> **Q4: What information is lost in the scalarization trick compared to the CG tensor product?** Thanks for this valuable question! Our HEGNN exclusively passes invariant quantities (inner products of high-degree representations) between features $\tilde v^{(l)}$ of different degrees $l$, unlike the expensive CG tensor product used in TFN, which considers all possible interactions between different frequencies. Our model can be seen as a generalization of the scalarization trick in EGNN to high-degree representations. While the scalarization trick might somehow sacrifice model expressivity in theory, it has shown significantly better efficacy and efficiency in practice compared to conventional high-degree models, as demonstrated by the EGNN paper and also our experiments here. Additionally, Theorem 4.1 indicates that passing inner products of full degrees is sufficient to recover the information of all angles between each pair of edges, affirming the theoretical expressivity of our HEGNN in characterizing the geometry of the input structure. As suggested, we will add a new paragraph comparing Eq. (6) and CG tensor products in Section 4. > **Q5: Explanation of the results in Table 2.** Thank you for your comments. The experiment setup in Table 2 follows the GWL paper [a]. It requires first embedding the input graphs through an equivariant neural network (e.g. EGNN, HEGNN, TFN), and then classifying them through a simple classifier. As for the phenomenon that HEGNN does not achieve 100% in some cases, it is probably because the classifier is not trained well. We observe that accuracy can be improved to 100% by increasing the number of training epochs to a sufficiently large value (e.g. 2,000). The high std is due to the settings used in the GWL paper. There are only 10 test groups in total, and each group is to classify two graphs into two classes. 
The classification accuracy can only be 0%, 50%, or 100% for each group, which might explain why the std is relatively large. [a] Joshi C K, Bodnar C, Mathis S V, et al. On the expressive power of geometric graph neural networks.

> **Q6: The reason for the performance gap between EGNN and HEGNN$\_{l\leq 1}$.**

Very insightful comment! We speculate that HEGNN with $l\leq 1$ outperforms EGNN mainly owing to the HEGNN initialization (Eq. (5)), which is equivalent to an additional message passing layer and increases the depth of the neural network. To show this, we additionally tested EGNN-4layer, HEGNN-3layer, and HEGNN-4layer on the N-body dataset; the results are as follows. All HEGNNs only used 1st-degree steerable features. It can be seen that the performance of EGNN-4layer is very close to that of HEGNN-3layer.

**Table S8:** Comparison between EGNN and HEGNN on N-body

|N-body ($\times10^{-2}$)|5-body|20-body|50-body|100-body|
|-|-|-|-|-|
|EGNN-4layer|0.65|1.01|1.00|1.36|
|HEGNN$\_{l\leq1}$-3layer|0.63|0.98|0.96|1.31|
|HEGNN$\_{l\leq1}$-4layer|0.52|0.79|0.88|1.13|

> **Q7: How to achieve reflection equivariance with HEGNN?**

Thank you for this valuable question. Since the steerable features are initialized by spherical harmonics in Eq. (5), reflection equivariance is associated with the transformation $(-1)^l D^{l}(\mathfrak r)$, where $D^{l}(\mathfrak r)$ is the rotation representation. In other words, the sign of the steerable feature changes if $l$ is odd, and remains unchanged if $l$ is even. Besides, during the message passing in our model, all steerable features only interact with scalars (i.e., the inner products), without changing their original equivariance. Hence, reflection equivariance is always represented by $(-1)^l D^{l}(\mathfrak r)$ throughout our model.

> **Q8: It is suggested to move Table 5 to the main text.**

Thank you for your suggestion. We will include it in the main text in the revised version.
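The $(-1)^l$ parity factor in the reflection discussion of Q7 can be checked numerically. The following is our own illustrative sketch (not the paper's code), using unnormalized real spherical harmonics for $l=1$ and $l=2$: under inversion $r \to -r$, degree-1 features flip sign while degree-2 features stay unchanged.

```python
import numpy as np

def sh_l1(v):
    """Degree-1 spherical harmonics: proportional to (x, y, z)."""
    x, y, z = v
    return np.array([x, y, z])

def sh_l2(v):
    """Degree-2 real spherical harmonics, up to normalization constants."""
    x, y, z = v
    return np.array([x * y, y * z, 2 * z**2 - x**2 - y**2, x * z, x**2 - y**2])

v = np.array([0.3, -1.2, 0.7])
assert np.allclose(sh_l1(-v), (-1)**1 * sh_l1(v))  # odd degree flips sign
assert np.allclose(sh_l2(-v), (-1)**2 * sh_l2(v))  # even degree is unchanged
```

A general reflection is an inversion composed with a rotation, so this inversion check isolates exactly the $(-1)^l$ factor described above.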
> **Minor: Some typos.** Thank you very much. We will proof-read our paper again and have the typos fixed. --- Rebuttal Comment 1.1: Title: Post rebuttal Comment: I thank the authors for their responses and the additional experiments provided. I strongly encourage them to update their manuscript in several parts as per the reviewers' suggestions (e.g. extra explanations to improve clarity, clearer comparison with CG tensor product to illustrate their technical contribution, additional experiments suggested by reviewer vhjv, computational complexity/runtime discussion suggested by reviewer op9c). I maintain my initial score and positive evaluation of this paper and recommend acceptance. --- Reply to Comment 1.1.1: Comment: Thank you for keeping your positive recommendation on our work. We will definitely and gladly revise our manuscript according to your inspiring suggestions and those of the other reviewers.
Summary: This paper challenges the prevailing notion that high-degree steerable representations are unnecessary in equivariant Graph Neural Networks (GNNs). The authors provide theoretical analysis showing that equivariant GNNs constrained to 1st-degree representations degenerate to zero functions when applied to symmetric structures like k-fold rotations and regular polyhedra. To address this limitation, they propose HEGNN, a high-degree extension of the EGNN model that incorporates higher-degree steerable vectors while maintaining efficiency through a scalarization technique. The authors evaluate HEGNN on symmetric toy datasets, N-body systems, and MD17 molecular dynamics datasets, demonstrating improved performance over existing models. Strengths: 1. The paper presents a clear and well-motivated research question, challenging an existing assumption in the field of equivariant GNNs. 2. The theoretical analysis is thorough and provides valuable insights into the limitations of low-degree representations for symmetric structures. 3. The proposed HEGNN model offers a promising approach to incorporating high-degree representations while maintaining computational efficiency. 4. The experimental results on symmetric toy datasets align well with the theoretical predictions, providing empirical support for the main claims. 5. The paper includes a good discussion of the trade-offs between expressivity and efficiency in equivariant GNN models. Weaknesses: 1. The experimental comparisons on the N-body system are limited and use different splits and variations compared to existing literature. This makes it difficult to directly compare HEGNN's performance to state-of-the-art methods. Including comparisons with more recent baselines (e.g., ClofNet, GCPNet, SaVeNet) and using standardized benchmarks would strengthen the empirical evaluation. 2. The experiments are primarily focused on predicting future positions of particles/atoms. 
While this is a relevant task, it may not fully demonstrate the necessity or advantages of high-degree representations across a wider range of equivariant GNN applications. Additional experiments on different tasks or domains could provide more comprehensive evidence to support the paper's claims. 3. Some relevant baselines are missing from the comparisons. For example, GMN-L is not included in the MD17 dataset experiments, despite outperforming the proposed method on several targets. Similarly, GMN is missing from the N-body systems experiment. Including these baselines would provide a more complete picture of HEGNN's performance relative to existing methods. 4. The paper could benefit from a more extensive discussion of the computational trade-offs involved in using high-degree representations. While the authors mention that HEGNN is more efficient than traditional high-degree models, a more detailed analysis of the computational costs and scaling behavior would be valuable. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can the authors conduct additional experiments with different tasks or application domains to further support the claims presented in the paper? This could help demonstrate the broader applicability and necessity of high-degree representations in equivariant GNNs. 2. How does HEGNN compare to more recent state-of-the-art methods on standardized N-body system benchmarks, such as those used in the ClofNet paper and subsequent works? 3. Can the authors provide a more detailed analysis of the computational efficiency of HEGNN compared to other high-degree models and EGNN? This could include training times, memory usage, and scaling behavior with respect to the number of particles and degree of representations. 4. Have the authors explored the performance of HEGNN on tasks beyond position prediction, such as force field prediction or other molecular property predictions? 
This could help strengthen the argument for the necessity of high-degree representations. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors acknowledge that their current experiments are mainly limited to testing on small molecules and have not been verified on large-scale molecules or large-scale physical systems. It remains to be verified whether HEGNN is effective on large-scale geometric graph datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments! We provide the following responses to your concerns: > **W1 & Q2: HEGNN should be tested on conventional dataset splits and compared with new baselines such as ClofNet.** Nice suggestion! We have additionally conducted experiments on standard N-body benchmarks (train/valid/test = 3k/2k/2k), and compared our method with more state-of-the-art methods including ClofNet as well as its variant ClofNet-vel [a], MACE [b], and SEGNN [c], thanks to the availability of their open-source code. The results on the standard split and our original protocol are reported in Table S5. It is observed that our HEGNN clearly outperforms existing methods in both cases, indicating the general effectiveness of our model. > **W2 & Q1 & Q4: Is it possible to test the performance of HEGNN on different prediction targets and in different application fields to further illustrate its versatility?** Thanks for your valuable suggestion. We chose the prediction of future atom positions as our task, as it is equivariant, meaning that the input and output spaces share the same coordinate system. With this task, we are able to evaluate whether the output of the model retains the full geometry, including orientation information, of the input after multi-layer message passing. We understand that additional experiments on different tasks or domains (e.g. force field prediction or molecular property prediction) would be helpful; these, however, are not the main focus of this paper and are better left for future exploration. > **W3: It is necessary to supplement baselines such as GMN and GMN-L.** Thanks for the reminder. As suggested, we have further included the results of GMN-L in the experiments on MD17 in Table S6, and added the comparisons with GMN in the N-body experiment in Table S5. Our HEGNN-6 generally outperforms GMN, and achieves comparable performance to GMN-L in most cases. 
Given that GMN-L requires careful handcrafting of constraints for chemical bonds into the model design, our model's ability to derive promising results without such enhancements supports its competitive performance. These results will be added to the revised paper.

**Table S5:** Results of N-body dataset under two partitions

|N-body($\times10^{-2}$)|5-body|20-body|50-body|100-body|
|-|-|-|-|-|
|train/valid/test=3k/2k/2k|||||
|EGNN|0.71|1.08|1.16|1.29|
|ClofNet|0.89|1.79|2.40|2.94|
|ClofNet-vel|0.84|1.50|2.28|2.67|
|GMN|0.67|1.21|1.18|2.55|
|MACE|1.43|1.93|2.20|2.51|
|SEGNN|1.81|2.67|3.44|Nan|
|HEGNN$\_{l\leq1}$|0.64|**0.84**|0.92|1.04|
|HEGNN$\_{l\leq2}$|0.69|0.89|1.13|**0.94**|
|HEGNN$\_{l\leq3}$|**0.58**|1.04|**0.92**|1.04|
|HEGNN$\_{l\leq6}$|0.77|1.06|1.02|1.18|
|train/valid/test=5k/2k/2k|||||
|GMN|0.52|0.98|1.04|1.21|
|ClofNet|0.80|1.49|2.28|2.77|
|ClofNet-vel|0.78|1.45|2.22|2.62|
|MACE|1.13|1.60|2.41|3.38|
|SEGNN|1.68|2.63|3.30|Nan|
|HEGNN$\_{l\leq1}$|0.52|0.79|0.88|1.13|
|HEGNN$\_{l\leq2}$|**0.47**|**0.78**|0.90|0.97|
|HEGNN$\_{l\leq3}$|0.48|0.80|**0.84**|0.94|
|HEGNN$\_{l\leq6}$|0.69|0.86|0.96|**0.86**|

**Table S6:** Results on MD-17 with GMN & GMN-L

|MD-17|Aspirin|Benzene|Ethanol|Malonaldehyde|Naphthalene|Salicylic|Toluene|Uracil|
|-|-|-|-|-|-|-|-|-|
|GMN|10.14±0.03|**48.12±0.40**|4.83±0.01|13.11±0.03|0.40±0.01|0.91±0.01|**10.22±0.08**|0.59±0.01|
|GMN-L|**9.76±0.11**|54.17±0.69|4.63±0.01|**12.82±0.03**|0.41±0.01|**0.88±0.01**|10.45±0.04|0.59±0.01|
|HEGNN$\_{l\leq1}$|10.32±0.58|62.53±7.62|4.63±0.01|12.85±0.01|0.38±0.01|0.90±0.05|10.56±0.10|0.56±0.02|
|HEGNN$\_{l\leq2}$|10.04±0.45|61.8±5.92|4.63±0.01|12.85±0.01|0.39±0.01|0.91±0.06|10.56±0.05|0.55±0.01|
|HEGNN$\_{l\leq3}$|10.20±0.23|62.82±4.25|4.63±0.01|12.85±0.02|**0.37±0.01**|0.94±0.10|10.55±0.16|**0.52±0.01**|
|HEGNN$\_{l\leq6}$|9.94±0.07|59.93±5.21|**4.62±0.01**|12.85±0.01|**0.37±0.02**|**0.88±0.02**|10.56±0.33|0.54±0.01|

> **W4 & Q3: Comparison of various indicators at runtime between HEGNN and other
high-degree steerable models.** Thanks! The main difference between our HEGNN and traditional high-degree models is that we employ inner products for message exchange between the representations of different degrees, while TFN resorts to CG tensor products to consider all possible interactions between different representations. By denoting the maximum degree as $L$, the complexity of our inner products is equal to $\textstyle\sum_{l=0}^L(2l+1)=(L+1)^2=O(L^2)$, whereas the complexity of CG tensor products is derived as $O(L^6)$ by [a]. The following tables further report the inference times and model sizes of the high-degree models in the 100-body case of the N-body dataset. It is verified that our HEGNN obtains better performance with lower computation cost, compared to TFN, SEGNN, and MACE.

**Table S7:** Parameters and inference times of models

|**Parameters of Models**|$l\leq1$|$l\leq2$|$l\leq3$|$l\leq6$|
|------------------------|-----------|-----------|-----------|-----------|
|EGNN|134.1k|--|--|--|
|HEGNN|160.3k|160.9k|161.5k|163.2k|
|TFN|--|9.6M|19.5M|86.6M|
|SEGNN|228.1k|244.1k|254.9k|288.1k|
|MACE|14.8M|38.0M|77.0M|342.5M|

|**Inference Times ($10^{-2} s$) of Models**|$l\leq1$|$l\leq2$|$l\leq3$|$l\leq6$|
|------------------------|-----------|-----------|-----------|-----------|
|EGNN|0.57|--|--|--|
|HEGNN|0.82|0.88|0.91|1.08|
|TFN|--|3.75|2.66|OOM|
|SEGNN|1.33|1.70|1.98|22.33|
|MACE|22.10|125.87|261.11|OOM|

[a] Passaro S, Zitnick C L. Reducing SO(3) convolutions to SO(2) for efficient equivariant GNNs.
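The dimension count behind the $O(L^2)$ claim above, $\sum_{l=0}^L(2l+1)=(L+1)^2$, can be sanity-checked with a few lines of Python (a toy check, not part of any model code):

```python
# Total dimension of one channel of steerable features up to degree L:
# each degree-l irrep of O(3) has dimension 2l + 1.
def feature_dim(L):
    return sum(2 * l + 1 for l in range(L + 1))

# The sum telescopes to (L + 1)^2, i.e. inner-product messages grow as
# O(L^2), versus the O(L^6) growth of full Clebsch-Gordan tensor products.
for L in range(10):
    assert feature_dim(L) == (L + 1) ** 2

print(feature_dim(6))  # 49 components for l <= 6
```

This is why raising the maximum degree from $l\leq1$ to $l\leq6$ multiplies the per-channel feature size by $49/4$, as the reviewer notes later in the thread.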
Summary: This paper studies the necessity of higher-degree features in geometric graph neural networks (i.e. graph neural networks processing data embedded in three-dimensional space), focusing on the ability to recognize data with non-trivial rotational inner-symmetry such as k-folds and regular polygons, and, based on the findings, proposes and validates an extension of the E(n)-GNN architecture. Theoretically, the authors first state in Theorem 3.4 that any O(3)-equivariant function on an inner-symmetric graph must produce an output which has an identical inner-symmetry. This immediately puts a restriction on the possible space of outputs an equivariant function can produce, more specifically the invariant subspace under the action of the inner-symmetry (Theorem 3.5). It follows that certain inner-symmetric inputs and certain choices of output spaces (i.e. degrees $l$) lead to the degenerate output space {0} (Theorem 3.6 and Table 1). Consequently, it follows that using sufficiently high-degree (even) features leads to non-degenerate output spaces of layers, and hence the ability to encode e.g. orientations of the input. The authors empirically validate the findings on expressive power in Section 5.1, and then demonstrate good performance of the proposed model based on high-degree features and invariant scalarization on n-body for n <= 100 and MD17 datasets. Strengths: - S1. The paper is well-written and easy to follow. - S2. The claims on expressive power regarding symmetric inputs in Section 3.3 are correct, and original as far as I can confirm; the closest related work is [1], and the authors have clarified the differences of their approach, which seems technically correct. This claim is verified by the synthetic experiments in Section 5.1. - S3. 
The model proposed by the authors based on the theoretical claims is original as far as I am aware, and it seems useful in practice (Section 5.2-5.3), both in terms of performance and efficiency (slightly more costly than EGNN, but still cheaper than other networks involving higher-degree tensors such as TFN; but it is unclear how the approach compares to MACE, see W4 in the Weaknesses section). In particular, the fact that the model performs well on the 100-body simulation task is interesting. [1] Joshi et al. On the Expressive Power of Geometric Graph Neural Networks (2024) Weaknesses: - W1. While technically different from [1], this paper conveys a similar message: the use of higher-degree tensors in geometric neural networks is necessary to obtain the higher expressive power required for certain problems. Given this, one may argue that the GWL hierarchy in [1] is more general (as it handles general non-symmetric inputs as well). Furthermore, inputs with rotational inner-symmetry have been investigated in Section 5.2 of [1] to argue in favor of higher-degree tensor features, precisely using the task of encoding rotations of the input; while the types of inner-symmetries considered in this work are more general (Table 1), the finding is fundamentally not very different, which can be understood as a weakness of the paper. - W2. It was not entirely clear to me why the capability of encoding rotations of inner-symmetric inputs would be important for practical learning tasks, although the experiments seem to imply so. For example, on n-body systems, I believe it is very unlikely that one would encounter symmetric inputs including the ones given in Table 1, due to symmetry-breaking factors such as numerical imprecision, simulation errors, and noise. I have similar concerns regarding chemical structures (e.g. 
MD17), as slight perturbations to the atomic coordinates are sufficient to eliminate the inner-symmetries (if they exist), and hence the problems given in Theorems 3.5 and 3.6 would not arise. This implies a gap between the theoretical arguments on the expressive power given in Section 3, and the empirical results given in Sections 5.2 and 5.3. - W3. While Theorem 4.1 says that the proposed architecture can recover the information of all angles between each pair of edges, it does not directly address the original problem given in Theorems 3.5 and 3.6. - W4. I am not sure why MACE [2], while also using higher-degree features and already used for experiments in Tables 1 and 2, is not included for comparison in Tables 3 and 4. [1] Joshi et al. On the Expressive Power of Geometric Graph Neural Networks (2024) [2] Batatia et al. MACE: Higher Order Equivariant Message Passing Neural Networks for Fast and Accurate Force Fields (2023) Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the Weaknesses section. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have discussed the limitations in the checklist, but not in the main text. I encourage the authors to move the discussions to a separate section in the main text or Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments! We provide the following responses to your concerns: > **W1. Differences and connections with GWL.** Thank you for raising the comparison with the GWL paper [1]. Here, we would like to further highlight the difference between [1] and our paper: 1. Different Motivations: the GWL paper aims to investigate the expressivity of different geometric GNNs from the perspective of WL test, whereas our paper focuses more on exploring the necessity of involving high-degree representations. The GWL test paper has discussed rotational inner-symmetry, but the analyses are only derived experimentally. Upon the formal definition of symmetric graphs, we are able to derive rigorous and theoretical results to explain the expressivity degeneration of equivariant GNNs on symmetric graphs in Table 1. Besides rotational inner-symmetry, we have also investigated regular polyhedra, the derivations of which are not trivial and rely heavily on our derived Theorem 3.6. 2. Different Evaluation Scopes: while the GWL paper only discusses the phenomenon of rotational inner-symmetry degradation in a single section, we thoroughly explore more general kinds of symmetric graphs in our experiments. In addition, to evaluate the practical effectiveness of our proposed model, we conduct experimental comparisons on the N-body and MD-17 tasks. The performances align with our theoretical findings in Theorem 3.6 and Theorem 4.1, verifying the strong expressive power of our model. > **W2. Stability of symmetric structures under perturbations.** Thanks for your question. We agree that it is unlikely to encounter exactly symmetric inputs in practice, but this does not indicate that our theoretical analyses are NOT practically meaningful. Even though symmetry-breaking factors will make the geometric graph deviate from the symmetric state, the deviated graph is still roughly symmetric. 
In other words, the outputs of equivariant GNNs on the deviated graphs remain close to zero if the degree value is chosen to be one of those in Table 1, according to Theorems 3.5 and 3.6, which will still lead to defective performance. To explore this further, we take the tetrahedron as an example and compare the cases of EGNN, HEGNN$\_{l= 3}$, and HEGNN$\_{l\leq 3}$ when adding noise perturbations.

**Table S3:** Results under perturbations

| | $\varepsilon=0.01$ | $\varepsilon=0.05$ | $\varepsilon=0.10$ | $\varepsilon=0.50$ |
| ----------------- | ------------------ | ------------------ | ------------------ | ------------------ |
| EGNN | 50.0 ± 0.0 | 45.0 ± 15.0 | 65.0 ± 22.9 | 60.0 ± 20.0 |
| HEGNN$_{l=3}$ | 100.0 ± 0.0 | 100.0 ± 0.0 | 100.0 ± 0.0 | 100.0 ± 0.0 |
| HEGNN$_{l\leq 3}$ | 100.0 ± 0.0 | 100.0 ± 0.0 | 100.0 ± 0.0 | 100.0 ± 0.0 |

Here, $\varepsilon$ represents the ratio of noise, and the modulus of the noise obeys $\mathcal{N}(0,\varepsilon\cdot\mathbb{E}[||\vec x-\vec x_c||]\cdot I)$. It can be observed that the performance of EGNN is slightly improved in the presence of noise (from $50$% when $\varepsilon=0.01$ to $60$% when $\varepsilon=0.5$), while HEGNN demonstrates better robustness. The symmetry-breaking factors you mentioned are very interesting, and the results above will be included in our revised paper.

> **W3. The difference between Theorems 3.5, 3.6, and Theorem 4.1.** We are sorry for any potential confusion. Since our proposed HEGNN involves high-degree representations, it does directly address the original problem given in Theorem 3.5 (3.6). The purpose of introducing Theorem 4.1 is to further demonstrate the enhanced expressivity of our HEGNN by using the inner products of full degrees. In other words, Theorem 3.5 (3.6) and Theorem 4.1 discuss different aspects of the benefits of our model.

> **W4. The performance of MACE on the N-body dataset and the MD-17 dataset should be provided.** Thanks for your reminder. 
We have additionally tested MACE on the N-body dataset. Due to the expensive running cost, we reduced the channel number of MACE from 64 to 8 (but out-of-memory still occurs in the 100-body case). Please note that MACE also resorts to CG tensor products, similar to TFN, so it is consistent that its performance is worse than our model's.

**Table S4:** Results of MACE and HEGNN on N-body dataset

| N-body ($\times 10^{-2}$) | 5-body | 20-body | 50-body | 100-body |
| :--------------------------- | :------- | :------- | :------- | :------- |
| MACE | 1.13 | 1.60 | 2.41 | 3.38 |
| HEGNN$\_{l\leq1}$| 0.52 | 0.79 | 0.88 | 1.13 |
| HEGNN$\_{l\leq2}$| **0.47** | **0.78** | 0.90 | 0.97 |
| HEGNN$\_{l\leq3}$| 0.48 | 0.80 | **0.84** | 0.94 |
| HEGNN$\_{l\leq6}$| 0.69 | 0.86 | 0.96 | **0.86** |

--- Rebuttal Comment 1.1: Comment: Thank you for the comprehensive response. The fact that GWL has only empirically investigated inner-symmetric inputs is something I had missed, and it indeed addresses the original limitation I raised. The results in Table S3 seem interesting and important; I propose that the authors include them in the main text in the revision, rather than in the Appendix. It would have been great if the performance of MACE on MD-17 had also been included, but I guess the results are still valid as they involve comparisons to the current SOTA (response to Q4 of reviewer op9c). I have adjusted my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for your constructive feedback and willingness to raise the score, which greatly inspires us. We will definitely incorporate your insightful suggestions into our revisions.
Summary: The paper studies the benefit of using higher order steerable features in geometric GNNs and theoretically identifies classes of symmetric geometric graphs where methods using only order-1 (or low order) features are guaranteed to fail. With this in mind, the authors propose a simple and efficient way to integrate higher order features in the EGNN architecture. Strengths: Clearly written and well motivated paper. The proposed architecture seems a valid alternative to existing steerable architectures, enabling the use of higher order features with reduced computational complexity. The paper also includes a few interesting theoretical insights about the failure cases of EGNN. Weaknesses: While the overall novelty is a bit limited (the message that higher order features are necessary for expressiveness has been theoretically and empirically explored in a number of previous works in the literature), the theoretical derivation of the failure cases and the proposed architecture can be useful contributions to the community. See questions below. Technical Quality: 4 Clarity: 4 Questions for Authors: Eq 7, 8: do I understand correctly that this kind of model can not pass any information between features $\tilde{v}^{(l)}$ of different orders $(l)$ beyond the invariant quantities used to compute $m_{ij}$? Then, that essentially means that, after the initial features are extracted via the first convolution with spherical harmonics in Eq. 5, *the rest of model* is invariant to independent rotations by O(3) of each feature $\tilde{v}^{(l)}$, right? This seems to suggest there should be a gap in performance with models like TFN or SEGNN, which instead allow for different frequencies (features of different orders) to interact inside the model. Can you elaborate on this? Sec 5.2: it seems important to compare with SEGNN [21] since that work was also motivated by the idea of adapting EGNN to support higher order steerable features, if I remember correctly. 
Table 3: I would expect the model size to scale quadratically with the maximum frequency $L$ since $\sum_{l=0}^L (2l+1) = (L+1)^2$, but the runtimes in Table 3 do not show this pattern. Why is this the case? Table 4: what are the SOTA on these datasets? It would be good to include the performance achieved by some previous works to give an idea of whether these are competitive results or not. Why introduce Theorem 3.5 when Theorem 3.6 seems more general and practical? Also, I think these theorems hold for any compact (not necessarily finite) subgroup by using the Peter-Weyl theorem. Line 226: is this a more efficient way to implement this fairly sparse operation? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 2 Limitations: Some limitations are addressed in the supplementary material. For completeness, I think the authors could comment on the possible gap in expressiveness with other steerable methods like TFN and SEGNN, which allow features of different orders to interact (see my first question above). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments! We provide the following responses to your concerns: > **Q1: The performance gap between scalarization-based models (e.g. HEGNN) and high-degree steerable models (e.g. TFN, SEGNN).** It is true that our HEGNN exclusively passes invariant quantities (inner products of high-degree representations) between features $\tilde v^{(l)}$ of different degrees $l$, unlike the expensive CG tensor product used in TFN or SEGNN, which considers all possible interactions between different frequencies. Our model can be seen as a generalization of the scalarization trick in EGNN to high-degree representations. While the scalarization trick might somehow sacrifice model expressivity in theory, it has shown significantly better efficacy and efficiency in practice compared to conventional high-degree models, as demonstrated by EGNN paper and also our experiments here. Additionally, Theorem 4.1 indicates that passing inner products of full degrees is sufficient to recover the information of all angles between each pair of edges, affirming the theoretical expressivity of our HEGNN in characterizing the geometry of the input structure. > **Q2: Comparison with SEGNN is important**. Nice suggestion! We have additionally compared our model with SEGNN in the table below, using the default settings from the public code. It is observed that SEGNN performs significantly worse than our model (in Table S1). We conjecture that the equivariant non-linear functions applied to the CG tensor products in SEGNN make it difficult to converge to a desirable solution during training, resulting in suboptimal performance. We will include these results in the revised version. > **Q3: The relationship between runtimes and model degree.** Thank you for this valuable observation. While the model size does scale quadratically with the maximum frequency $L $, this does not mean that the exact runtimes in Table 3 follow the same pattern. 
By leveraging PyTorch, which is built on the CUDA toolkit, computations for different frequencies can be largely parallelized. As a result, the actual implementation cost is lower than the quadratic complexity. > **Q4: SOTA on N-body and MD-17**. Thanks for the reminder. For the N-body dataset, we follow the settings in FastEGNN [a], which was recently published and can be regarded as the SOTA model. As shown in the table below, in all cases from 5-body to 100-body, our HEGNN models generally outperform FastEGNN, verifying the effectiveness of our model's design. For the MD-17 dataset, we apply the evaluation protocol from the GMN paper [b]. We thoroughly checked the methods cited in this paper and found that the GMN-L method proposed in [b] generally performs the best, and thus consider it the SOTA method. Our HEGNN-6 achieves comparable performance to GMN-L in most cases. Given that GMN-L requires careful handcrafting of constraints for chemical bonds into the model design, our model's ability to derive promising results without such enhancements supports its competitive performance. [a] Zhang Y, Cen J, Han J, et al. Improving Equivariant Graph Neural Networks on Large Geometric Graphs via Virtual Nodes Learning. [b] Huang W, Han J, Rong Y, et al. Equivariant graph mechanics networks with constraints. 
**Table S1:** Results on N-body

|N-body($\times10^{-2}$)|**5-body**|**20-body**|**50-body**|**100-body**|
|-|-|-|-|-|
|FastEGNN|0.66|0.81|1.03|0.99|
|SEGNN|1.68|2.63|3.30|Nan|
|HEGNN≤1|0.52|0.79|0.88|1.13|
|HEGNN≤2|**0.47**|**0.78**|0.90|0.97|
|HEGNN≤3|0.48|0.80|**0.84**|0.94|
|HEGNN≤6|0.69|0.86|0.96|**0.86**|

**Table S2:** Results on MD-17

|MD-17|Aspirin|Benzene|Ethanol|Malonaldehyde|Naphthalene|Salicylic|Toluene|Uracil|
|-|-|-|-|-|-|-|-|-|
|GMN|10.14±0.03|**48.12±0.40**|4.83±0.01|13.11±0.03|0.40±0.01|0.91±0.01|**10.22±0.08**|0.59±0.01|
|GMN-L|**9.76±0.11**|54.17±0.69|4.63±0.01|**12.82±0.03**|0.41±0.01|**0.88±0.01**|10.45±0.04|0.59±0.01|
|HEGNN≤1|10.32±0.58|62.53±7.62|4.63±0.01|12.85±0.01|0.38±0.01|0.90±0.05|10.56±0.10|0.56±0.02|
|HEGNN≤2|10.04±0.45|61.8±5.92|4.63±0.01|12.85±0.01|0.39±0.01|0.91±0.06|10.56±0.05|0.55±0.01|
|HEGNN≤3|10.20±0.23|62.82±4.25|4.63±0.01|12.85±0.02|**0.37±0.01**|0.94±0.10|10.55±0.16|**0.52±0.01**|
|HEGNN≤6|9.94±0.07|59.93±5.21|**4.62±0.01**|12.85±0.01|**0.37±0.02**|**0.88±0.02**|10.56±0.33|0.54±0.01|

> **Q5: The relationship between Theorems 3.5 and 3.6.** Thanks for your comment. In fact, Theorem 3.5 addresses the more general case, while Theorem 3.6 is a special case for practical convenience (since $I-0=I$ is full-rank). For practical purposes, we consider only finite symmetry groups, because only geometric graphs with one or two nodes can exhibit infinite symmetry groups, representing single atoms or diatomic molecules in physics [a]. We do not consider these cases due to their trivial topological structure in realistic tasks. Generalization to compact subgroups can be easily achieved by replacing summation with integration, with volume elements chosen to be normalized and invariant to the group action. From a mathematical perspective, Theorems 3.5 and 3.6 essentially calculate the dimension of the invariant space of a group. This can be achieved by calculating the trace of the projection map of the subgroup [b]. 
Generalization to compact subgroups is more naturally done by introducing the projection map of compact subgroups, as found directly in [b]. However, for readability, we have not delved deeply into representation theory in this paper. In our future work, we may present our results and further findings in the language of representation theory. [a] Landau, L. D., and E. M. Lifshitz. Quantum Mechanics: Non-Relativistic Theory. [b] Fulton, William, and Joe Harris. Representation Theory. > **Q6: Efficiency of operators in Line 226.** Yes, this is a more efficient way. In this form, we are able to apply e3nn, which is the most commonly used library for processing spherical harmonics. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed reply. Regarding Q3, the ratio between the models' sizes for $l\leq6$ and $l\leq1$ is larger than 10 (since $(1+1)^2=4$ vs $(6+1)^2=49$). It still seems surprising to me that a model 12 times larger is only 30% slower. Am I missing something? Is it possible that the model's bottleneck is somewhere else? Or, are other hyper-parameters changed in the model across different values of $L$? Regarding Q4 and Table S1, the (N=5 body) performance reported for SEGNN seems very different from the one in Table 1 of the original paper [21] (1.68 vs 0.43). Why is that the case? Is the dataset different from the one used in [21]? --- Rebuttal 2: Comment: We sincerely appreciate your further comments. We provide more explanations to address your concerns below. > **Q1: The relationship between model time consumption and degree.** Your question is both valuable and inspiring! We sincerely apologize for our previous ill-considered response. Upon re-evaluating the complexity of our model, we now believe that your insight is correct: the model's bottleneck lies not only in the value of the degree $L$ but also in the number of channels. 
As illustrated in the following table, for our models with different $L$ values, the number of channels for type-0 features is fixed at 65, while it is no more than 3 for higher-type features. In other words, the computations for type-0 features account for the majority of the overall running cost. This could explain why the runtimes in Table 3 do not scale quadratically with respect to $L$. Additionally, there are other factors that affect the model's time consumption, including the initialization in Eq. (5) and the coefficient calculations in Eqs. (5) and (8). Thank you once again for your constructive comment! We will definitely include the above explanations around Table 3 in the revised paper.

**Table S9:** Total dimensions used in HEGNN of different degrees

||HEGNN$\_{l\leq1}$|HEGNN$\_{l\leq2}$|HEGNN$\_{l\leq3}$|HEGNN$\_{l\leq6}$|
|-|-|-|-|-|
|Type-0|65|65|65|65|
|Type-1|3|3|3|3|
|Type-2|-|1|1|1|
|Type-3|-|-|1|1|
|Type-4|-|-|-|1|
|Type-5|-|-|-|1|
|Type-6|-|-|-|1|
|Total Dim|74|79|86|119|

> **Q2: The gap in SEGNN's performance between our results and the original paper.** Yes, the performance discrepancy of SEGNN can be attributed to the use of a different dataset than the one employed in the original SEGNN paper [21]. In our paper, we utilized the dataset constructed by [a] to provide a more thorough evaluation of the compared methods. This dataset includes a wider range of scenarios, spanning from 5 bodies to 100 bodies, beyond the 5-body case studied in [21]. It is important to note that [a] employed different preprocessing and dataset splitting compared to [21], which may explain why SEGNN's performance differs even in the same 5-body scenario as [21]. In addition to the dataset in [a], here we have conducted additional experiments using the SEGNN dataset, following the official code from the SEGNN paper for dataset construction. 
We then re-evaluated the performance of EGNN, SEGNN, and our proposed HEGNN models across various scenarios, ranging from 5 bodies to 100 bodies. The results are presented in the following table.

**Table S10:** Comparison between EGNN, SEGNN and HEGNN on N-body from [21]

|N-body ($\times 10^{-2}$)|5-body|20-body|50-body|100-body|
|-|-|-|-|-|
|EGNN|0.71|1.04|1.15|1.31|
|SEGNN|**0.50**|6.61|9.34|13.46|
|HEGNN$\_{l\leq1}$|0.71|0.97|**0.93**|1.22|
|HEGNN$\_{l\leq2}$|0.65|**0.91**|1.05|**1.14**|
|HEGNN$\_{l\leq3}$|0.63|0.99|1.05|1.27|
|HEGNN$\_{l\leq6}$|0.72|1.05|1.11|1.28|

As observed, SEGNN's performance closely aligns with the results reported in its original paper for the 5-body case (0.50 vs. 0.43), supporting the reliability of our implementation. However, SEGNN shows significantly higher losses in scenarios with a larger number of bodies, indicating a decline in performance as task complexity increases. We conjecture that the steerable nonlinear functions used in SEGNN could make the learning process more difficult when the number of bodies increases. Although our proposed HEGNN model performs worse than SEGNN in the 5-body case, it consistently exhibits lower loss across the other configurations, demonstrating its superior ability to handle increasing task complexity. We appreciate your valuable attention to this detail and will include the above results and analyses to further strengthen the validity of our findings. [a] Huang W, Han J, Rong Y, et al. Equivariant graph mechanics networks with constraints. --- Rebuttal 3: Comment: Thanks for the reply, I think that cleared most of my doubts! I encourage the authors to include these details and results in the final version of the manuscript. I still feel that the novelty is limited, but the paper includes some interesting insights, and the proposed method seems effective and thoroughly evaluated. For these reasons, I maintain my positive recommendation. 
--- Rebuttal Comment 3.1: Comment: Thank you for your positive recognition of our paper. We will definitely add the details and results in the final version of our paper. Regarding your mention of our limited novelty, we realize that we did not explain it clearly in our previous response. Please allow us to add a few points here. - Although there have been some works studying the importance of using high-degree features, we are the first to rigorously explain the expressivity barriers of low-degree representations. Theorem 3.5 and Theorem 3.6 are simple and concise, yet insightful and of independent technical interest, as also pointed out by Reviewer 3KcU. - To the best of our knowledge, our HEGNN model is the first to exploit high-degree steerable features using the scalarization trick. This approach not only surpasses prior higher-order models in efficiency but also achieves superior experimental results. Besides, its easy implementation may facilitate reproducibility. - In Theorem 4.1, we demonstrate that employing inner products of full degrees can recover the information of all angles between each pair of edges, thus affirming the theoretical expressivity of our HEGNN in capturing the geometry of the input structure. Once again, thank you very much for your valuable comments and suggestions that help improve our paper.
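As background on the scalarization trick discussed in this thread: its core property is that inner products of equivariant features are invariant under rotations, so they can be passed as plain scalar messages. A minimal pure-Python check of this property for degree-1 (vector) features — a toy sketch, unrelated to the authors' actual implementation:

```python
import math

def rot_z(theta):
    """3x3 rotation matrix about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(R, v):
    """Apply a 3x3 matrix R to a 3-vector v."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x1 = [1.0, 2.0, 3.0]
x2 = [-0.5, 0.7, 1.1]
R = rot_z(0.9)

# The inner product of two equivariant (here degree-1) features is
# unchanged when both are rotated by the same R: <Rx1, Rx2> = <x1, x2>.
assert abs(dot(apply(R, x1), apply(R, x2)) - dot(x1, x2)) < 1e-12
```

The same identity holds degree by degree for higher-degree steerable features (with Wigner-D matrices in place of R), which is what lets scalar inner products carry geometric information between degrees without CG tensor products.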
Rebuttal 1: Rebuttal: # General Response We sincerely thank all reviewers and ACs for their time and effort in reviewing the paper. We are very glad that the reviewers recognized the problems we studied, the theories we proposed, and the models we built, and their comments gave us a great deal of inspiration. For the symmetric structure problem we studied and the theory we proposed, the reviewers found it well motivated (Reviewer op9c, vhjv), and the theoretical analysis correct (Reviewer 7wT3), thorough (Reviewer vhjv), and insightful and of independent technical interest (Reviewer 3KcU). Our proposed HEGNN, a new method of introducing steerable features through the scalarization technique, was evaluated as original (Reviewer 7wT3, 3KcU) and promising (Reviewer vhjv), enabling the use of higher-order features with reduced computational complexity (All Reviewers). The whole article is consistent between theory and experimental results (Reviewer 7wT3, vhjv), and clear and easy to understand (Reviewer 7wT3, 3KcU). We also appreciate the reviewers for their insightful comments. To address their concerns, we have added additional experiments as follows. **Table S1** shows the results of FastEGNN, SEGNN and HEGNN on the N-body dataset with different numbers of particles. **Table S2/S6** shows the experimental results of HEGNN, GMN & GMN-L on the MD-17 dataset. **Table S3** compares HEGNN with EGNN on symmetric graphs when adding noise perturbations, in order to analyse how the symmetry-breaking factors influence the performance. **Table S4** shows the results of HEGNN and MACE on the N-body dataset with different numbers of particles. **Table S5** compares HEGNN with more relevant baselines under two different protocols on the N-body dataset. **Table S7** reports the inference times and model sizes of high-degree models. 
**Table S8** compares the performance between 4-layer EGNN, 3-layer HEGNN$\_{l\leq 1}$, and 4-layer HEGNN$\_{l\leq 1}$ on the N-body dataset, to explain why HEGNN$\_{l\leq 1}$ is better than EGNN.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Estimating the Hallucination Rate of Generative AI
Accept (poster)
Summary: This paper presents a Bayesian interpretation of in-context learning. This interpretation enables us to calculate the hallucination rate. In other words, by considering in-context examples as observations, the posterior distribution can be computed and the hallucination rate derived. Numerical experiments verify the practicality of the interpretation. Strengths: 1. Hallucination is one of the most important problems of large language models. This paper explores a theoretical understanding of the problem in in-context learning. 2. The idea of interpreting the in-context examples as observations, described in Section 2.1, is original and interesting. Weaknesses: In summary, I suggest that the authors clarify the focus of this paper as either the hallucination rate for NLP tasks or the error rate for general in-context learning tasks. 1. As discussed in L222, the proposed theory cannot be applied as-is in NLP tasks and an approximated metric is required. Although the introduction of this paper focuses on NLP tasks, the usefulness of the proposed theory in this area is limited, as reported in L265 with Figure 6. 2. Hallucination is a complex phenomenon with a wide range of causes [1]. As mentioned in the above weakness, the effectiveness of this theory is limited to the synthetic regression task. In light of this, I am concerned whether it is appropriate to call the phenomenon examined in this paper a hallucination rate rather than an error rate. 3. The claim of Section 2.2 is to use the cumulative probability of the token-generation probability distribution as the confidence for token y. However, using cumulative probability as the confidence of token y is a common technique in NLP, for example in nucleus sampling [2]. To demonstrate the importance of the theory presented in this section, I consider it necessary to show its universality when other statistics S are used. 
Otherwise, I also suggest moving this section to the appendix as noted in the following weakness. 4. L146 says "Algorithm 1 can be understood intuitively without appealing to any Bayesian arguments". I agree with this statement, and I consider that the necessity of the presented theory depends largely on the discussion in Appendix B.1. Because the appendix is just supplementary material and cannot play a role in justifying the main claim, I suggest the authors revise the structure of the paper. [1] Huang et al., A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions, arXiv:2311.05232. [2] Holtzman et al., The Curious Case of Neural Text Degeneration, ICLR 2020. Technical Quality: 2 Clarity: 2 Questions for Authors: See Weaknesses. Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 4 Limitations: This paper discussed broader social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
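As background for the nucleus-sampling technique cited as [2] in the review above, here is a minimal sketch of how cumulative probability is used as a confidence threshold over tokens (our illustration; the function name and `top_p` value are assumptions, not taken from the paper or review):

```python
import numpy as np

def nucleus_mask(probs, top_p=0.9):
    """Boolean mask over tokens kept by nucleus (top-p) sampling: the
    smallest set of highest-probability tokens whose cumulative
    probability reaches top_p."""
    order = np.argsort(probs)[::-1]          # tokens sorted by descending probability
    csum = np.cumsum(probs[order])
    # keep a token if the cumulative mass *before* it is still below top_p;
    # this always keeps at least the single most likely token
    keep_sorted = csum - probs[order] < top_p
    mask = np.zeros_like(probs, dtype=bool)
    mask[order[keep_sorted]] = True
    return mask

probs = np.array([0.5, 0.3, 0.2])
print(nucleus_mask(probs, top_p=0.7))  # keeps the first two tokens
```

This operates on the full next-token distribution $p(y \mid x, D)$; the authors' rebuttal below stresses that their use of the CDF is different, applying to $p(y \mid x, f)$ with $f$ an implicit latent.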
Rebuttal 1: Rebuttal: Thank you for taking the time to read and comment on our work. We are glad you found the ideas original and the contribution excellent. Although we have some disagreements about the weaknesses, we think your concerns are important and aim to clarify them below. **W1: As discussed in L222, the proposed theory cannot be applied as is in NLP tasks and an approximated metric is required.** We appreciate the opportunity to clarify things here. The MHR *is not* an approximation of the PHR that enables the application of the theory to language tasks. Instead, it is an evaluation metric used to help assess whether the PHR estimator is working as intended when we don't have access to the ground truth data-generating process and thus cannot compute and compare against the THR. We discuss the shortcomings of the empirical error rate as an evaluation metric in L242-247 and offer the MHR as a complementary evaluation metric to address those shortcomings. We present both metrics to serve as proxies for the THR. We stress that the MHR is an evaluation metric and that the PHR, as presented, is directly applicable to NLP tasks. **W2.1: "...the effectiveness of this theory is limited to the synthetic regression task..."** In light of our above clarification, we believe we have provided evidence that our theory is appropriate for NLP tasks. Notably, the PHR can accurately track the error rate for in-capability tasks, as shown in the first three panes of Figure 6. Moreover, when the model is assumed to define the in-context learning distribution, the PHR accurately tracks the MHR in all settings, as shown in Figure 5. Finally, there are non-NLP models where the theorem applies directly, for example, conditional neural processes [24] and prior fitted networks [56]. **W2.2: "Hallucination is a complex phenomenon ... [is it] appropriate to call the phenomenon examined ... 
a hallucination rate rather than an error rate."** We agree that hallucination is a complex phenomenon. We believe that progress toward a holistic understanding of hallucination requires a principled deconstruction of the phenomenon into manageable steps. And we argue that one cause of hallucination is the *context* being "underspecified," even if the *model* can perform a given task. For example, imagine asking, "Who is the president?" An LLM distribution over possible responses may include all the different ways to convey Joe Biden, Emmanuel Macron, the president of Microsoft, or a historical president of a fictional society. If your $f^*$ corresponds to semantic equivalents of "Joe Biden," all other responses are considered hallucinations. This notion of hallucination is consistent with the recent Nature paper, "Detecting hallucinations in large language models using semantic entropy." The intriguing thing about such hallucinations is their resolution through improving the context. For example, asking, "It is August 6th, 2024, who is the president of the United States?" shrinks all uncertainty concerning *which* president. This example is already quite complex to start thinking about from mathematical first principles, so we distill the essence of it and tackle such hallucinations through the lens of in-context learning. This perspective allows us to take a rigorous mathematical approach to defining hallucinations and hallucination rates while still being able to study the phenomenon using the practical setting of LLMs. Moreover, it offers a firm foundation to start building back toward understanding this type of hallucination in less structured language tasks. **W3.2: To demonstrate the importance of the theory ... it [is] necessary to show its universality when other statistics S are used. Otherwise, I ... suggest moving this section to the appendix ... Algorithm 1 can be understood intuitively without appealing to any Bayesian arguments ... 
the authors should revise the structure of the paper.** We acknowledge these concerns but disagree that the paper should be restructured. 1. The PHR, its theoretical justification, and its extension to modern conditional generative models are the primary contributions of this paper. 2. Though we have not found a generalization of our result within a larger class of statistics, we have used similar techniques to derive an estimator for the mutual information to quantify uncertainty. Including the theory in the main text can inspire other researchers to use similar techniques in their algorithms. Specifically, we can use a similar (but not identical) technique to derive an estimator for the expected entropy $\mathbb{E}_{p(f|x, D_n)} \left[ \text{H}(Y \mid f, x) \right]$. *Theorem: Assume that the conditions of Theorem 1 hold for $F$ and $(X, Y)$.* Then, $$ \mathbb{E} \left[ \text{H}(Y \mid f, x) \right] = \mathbb{E} \left[ \lim_{N \rightarrow \infty} \text{H}(Y \mid (x_i, y_i)^N_{n+1}, x, D_n) \right], $$ where the first expectation is taken with respect to $p(f \mid x, D_n)$ and the second with respect to $p((x_i, y_i)^\infty_{n+1} \mid D_n, x)$. 3. There are non-NLP models where the theory does apply directly, such as conditional neural processes [24]---which demand exchangeability from the stochastic process---or prior data-fitted networks [56]. **W3.1: The claim of Section 2.2 is to use the cumulative probability of the token generation probability distribution as the confidence for token y ...** We understand the reviewer's concerns but respectfully disagree. We presented a rate using the CDF of probabilities because it is intuitive and widely used, which we believe strengthens and clarifies the theory. Additionally, we emphasize that we use the CDF differently than in nucleus sampling. Rather than examining $p(y \mid x, D)$ as done in nucleus sampling, we propose an algorithm for examining $p(y \mid x, f)$ where $f$ is an implicit latent. 
Nucleus sampling mixes uncertainty about $p(f \mid D)$ with uncertainty about $p(y \mid x, f)$, whereas our method disentangles them. --- Rebuttal 2: Comment: Thank you for offering such a thorough response to my concerns. The misunderstandings about MHR have been clarified. I also agree that PHR can accurately track the error rate in the first three panes of Figure 6. For these reasons, I raised my score. However, I did not raise the score further mainly because of W2 and W4. Regarding W2, the paper is largely devoted to experiments and discussions on the synthetic task, and those on NLP are insufficient to explore the hallucination problem. Regarding W4, the justification of the contribution is described in the appendices. --- Rebuttal Comment 2.1: Comment: We sincerely appreciate the additional time you have taken to consider our rebuttal and your decision to raise your score. Regarding W2, we ask you to consider further the language model experiments we have run in response to reviewers VGJP and 6dN5. For reviewer VGJP, in a more realistic setting where labels can take on multiple semantically equivalent values, we show that the PHR maintains its performance in predicting the error rate and MHR in [Figure Error](https://anonymous.4open.science/r/figures-phr-rebuttal-E611/vgjp/w1_figure_error.png) and [Figure MHR](https://anonymous.4open.science/r/figures-phr-rebuttal-E611/vgjp/w1_figure_mhr.png). And for reviewer 6dN5, we show that our methods also work for the Gemma-2 9B model, where this [figure](https://anonymous.4open.science/r/figures-phr-rebuttal-E611/6dn5/figure1.png) indicates that the PHR can accurately predict the empirical error rate for Gemma-2 9B, and this [figure](https://anonymous.4open.science/r/figures-phr-rebuttal-E611/6dn5/figure2.png) shows that the PHR accurately predicts MHR for Gemma-2 9B. Regarding W4, we will add a discussion to the main paper summarizing the justification we present in the appendix, to improve the paper's structure.
Summary: This paper presents a method for predicting the hallucination rate of in-context learning with conditional generative models. Strengths: - NA Weaknesses: - Unclear how many queries would be required to validate the approach, as this will be very dependent on the task, context, and LLM - this is clearly a missing element of the evaluation - Evaluation on Llama 2 only - more is definitely required to validate the approach - No identification of where the hallucination might come from - Title is misleading - this is basically a hallucination identification method - Not cost-effective, as this requires multiple LLM calls Technical Quality: 2 Clarity: 2 Questions for Authors: - Why weren't more LLMs used for experimentation? - What would be the right number of LLM calls needed? - What parameters is the hallucination rate a function of? What are the optimal parameters? For which tasks? For which LLMs? Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 1 Limitations: cf. weaknesses and questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 2 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our work. It is disheartening that you did not find it appropriate to attribute strengths to our work. We hope to convince you of what we believe constitutes a significant positive contribution. We have addressed your concerns below and look forward to a fruitful discussion. **Q1: Why weren't more LLMs used for experimentation?** We have also run Gemma-2 9B on the "in-capability" SST2, Subjectivity, and AG News tasks. In this [figure](https://anonymous.4open.science/r/figures-phr-rebuttal-E611/6dn5/figure1.png), we show that the PHR can accurately predict the empirical error rate for Gemma-2 9B. In this [figure](https://anonymous.4open.science/r/figures-phr-rebuttal-E611/6dn5/figure2.png), we show that the PHR accurately predicts MHR for Gemma-2 9B. We believe that these responses answer your concerns. If there are any remaining sources of concern, we are happy to discuss them further. **Q2: What would be the right number of LLM calls needed?** Using the notation in Algorithms 1 and 2, $N$ (the number of generated examples), $M$ (the number of Monte Carlo samples), and $K$ (the number of response samples) are ideally as large as possible because increasing them will improve the accuracy of the PHR estimator. Considerations like the computational budget will influence the values chosen. In addition to these considerations, $N$ may need to be small enough so that the generated examples do not overflow the maximum context length seen during model training, which could result in nonsensical generations. The number of response samples $K$ may also be influenced by the choice of $\epsilon$. For example, evaluating the PHR at $\epsilon=0.1$ would require at least $K=10$ response samples, whereas evaluation at $\epsilon=0.02$ would need at least $K=50$ response samples. In the context of our language model experiments, we have shown that a setting of $N=5$, $M=10$, and $K=50$ yields good results. 
Moreover, our ablations summarized in Table 1 of the Appendix do not give strong evidence that increasing these values (i.e. increasing computational cost) for these experiments leads to a significant gain in performance. We are happy to add this discussion to the main paper or the Appendix. **Q3: What parameters is the hallucination rate a function of? What are the optimal parameters? For which tasks? For which LLMs?** We discuss the parameters $N$, $M$, and $K$ above. Now we consider the parameter $\epsilon$. An intuitive way to think about the $\epsilon$ parameter is from a decision-making perspective. Algorithmic decision-making system design includes error tolerances. For example, a chat application may be acceptable if it hallucinates only 5% or 10% of the time (i.e., it provides useful correct answers in 95% or 90% of user interactions). When the model accurately defines the task distribution, the PHR directly addresses such considerations. For example, if the tolerated rate of hallucination is 5%, then a PHR estimate greater than 0.05 could inform the deferral of a response. With this constraint, the system designer would choose $\epsilon \leq 0.05$ to estimate hallucination rates at least as small as 5%. Related to the above discussion, we can see that the computational cost grows with the tolerance stringency. If the tolerated hallucination rate is 10%, we need $K \geq 10$; but if it is 1%, we need $K \geq 100$ to resolve the desired rate. Safety-critical situations may justify the additional computational cost. Ideally, these considerations are independent of task and model. However, we are transparent that accurate estimation of the PHR depends on the model defining a conditional distribution that closely approximates the in-context learning conditional distribution. 
We maintain that this is a significant and self-contained first step in estimating hallucination rates in general, and we leave it to future work to address the case when tasks are "out-of-capability." **W5: Computational complexity** It is true that our method, like other sampling-based methods, carries the computational cost of generating multiple responses to a given query and the cost of producing additional examples. We contend that this additional computational cost is justifiable for at least the following reasons: 1) The method holds for transformer models that define probabilities factorized as in equations (1-3). Model classes that satisfy this requirement exist, like conditional neural processes [24] and prior fitted networks [56], which are implementable with smaller Transformer models. 2) We believe the method has scientific value. It helps determine if the uncertainty in an LLM's probabilities is due to a lack of in-context data or if it arises from inherently stochastic answers (i.e., the difference between reducible and irreducible uncertainty). We expect a highly entropic answer in both cases, but the latter would result in a low hallucination rate estimate, whereas the former would result in a high estimate. Future research could explore the scenarios where high entropy results in reducible or irreducible uncertainty. This analysis could clarify whether a model answers incorrectly due to genuine data ambiguity or insufficient in-context data. For such scientific inquiries, the computational constraints are less of a concern. 3) Our paper provides a comprehensive underlying theory. Though our initial estimator of the PHR is computationally complex, more efficient methods exploiting clever solutions to the integrals presented could be developed in the future. --- Rebuttal 2: Comment: Dear Reviewer, thank you again for the time and effort you've dedicated to reviewing our work. 
Your insights and feedback have significantly contributed to improving the quality of our paper. We believe our responses address the concerns raised in your reviews and are committed to making the necessary revisions to clarify any uncertainties. As the discussion period is nearing its conclusion, if you find that any aspects of our responses require further clarification or discussion, we are eager to engage in constructive dialogue.
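The relationship between the tolerance $\epsilon$ and the number of response samples $K$ discussed in the Q2/Q3 answers above ($\epsilon=0.1$ needs $K \geq 10$, $\epsilon=0.02$ needs $K \geq 50$, $\epsilon=0.01$ needs $K \geq 100$) amounts to $K \geq 1/\epsilon$; a minimal sketch (the function name is our own, not from the paper):

```python
import math

def min_response_samples(eps):
    """Smallest number of response samples K that can resolve a
    hallucination-rate tolerance of eps, i.e. K >= 1/eps."""
    return math.ceil(1.0 / eps)

# the figures quoted in the rebuttal:
print(min_response_samples(0.1))   # 10
print(min_response_samples(0.02))  # 50
print(min_response_samples(0.01))  # 100
```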
Summary: The paper focuses on the in-context learning setting of generative AIs, such as large language models, and proposes a new definition for hallucination. It introduces a novel metric, PHR, along with a corresponding estimation method. Unlike traditional metrics, the proposed metric accounts for label ambiguity resulting from unclear task specifications. Strengths: - The paper highlights a significant issue with using error rates to evaluate hallucination, that the ambiguity on labels caused by the lack of task specification has been overlooked. - The definition and algorithm of PHR are logically sound and make sense to me. Weaknesses: - The notations are messy and confusing. - - There is misuse and abuse of uppercase random variables and lowercase instantiations. For example, the distribution of $F$ is referred to as $p(f)$ in lines 87 and 121, while it is referred to as $p(F)$ in line 124. This problem is exacerbated in Theorem 1, making it hard to read. - - The notations PHR/THR can refer to both the definitions and the algorithms, which adds to the confusion. For instance, in the sentence on line 174, “we evaluate the PHR in a setting where we know the true mechanism f* so that we can compare it directly against the true hallucination rate (THR),” PHR appears to refer to the algorithm, while THR seems to refer to Definition 3. - The experimental setup is difficult to understand. - - For example, the experiments in Section 3.1 are supposed to evaluate the proposed method in a setting where $f^*$ is known. However, what $f^*$ is and how THR is calculated are not explained. To understand this, at least the way $p(y|x, f^*)$ is set should be specified. Given the unused space at the end of the paper, there is no reason for the authors not to elaborate on the experimental settings. - - In Section 3.2, the authors attempt to justify the use of PHR by introducing another proposed metric called MHR. It is unclear how MHR could serve as a gold metric. 
Even if MHR is a gold metric, why not just use MHR? - Typos: Line 31 "the" should be deleted. Technical Quality: 3 Clarity: 2 Questions for Authors: Please refer to the weaknesses. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Limitations have been well discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and attention. We appreciate that you have identified several strengths, and recognize that your main criticisms are concerned with clarity and missing details. We respond to each of your comments below and have updated our manuscript accordingly. **W1.1: There is misuse and abuse of uppercase random variables and lowercase instantiations.** We thank the reviewer for their comment concerning the inconsistent notation for the prior over mechanisms $p(\mathrm{f})$. Indeed, there is a typo and for consistency with the rest of the paper, it should read $\mathrm{F} \sim p(\mathrm{f})$. As pointed out by the reviewer, our goal was to use uppercase to emphasize that a quantity is being treated as a random variable and lowercase when it is simply a number or a realized value. As an alternative, we propose to use lowercase letters throughout the paper (for example using lowercase in lines 88 and 87) and only to use uppercase when strictly necessary to emphasize the fact that we are dealing with a random variable (for example in theorem 1). We plan to correct the paper accordingly if you find this to be satisfactory, but we are open to any alternative suggestions you may have. **W1.2: The notations PHR/THR can refer to both the definitions and the algorithms** We recognize that using the PHR notation for both the definitions and algorithms is a source of confusion. We will update the manuscript to use $\widehat{\text{PHR}}$ when referring to the algorithm and $\text{PHR}$ to refer to the definition. **W2.1.1: "... in Section 3.1 ... what $f^\*$ is ... [is] not explained."** We briefly explain the data-generating process in Appendix E.1 but agree that we can be more explicit and we will include a clear definition in the main text. 
First, $f^*$ is going to be defined by a random ReLU neural network: $f^*_w(x) = w_2^\top\text{ReLU}(w_1 x)$, where $w_1$ and $w_2$ are instances of the $d\times1$ dimensional random variable $W \sim \mathcal{N}(0, 1)$. Queries $x$ are instances of the uniformly distributed random variable $X \sim \mathcal{U}(-2, 2)$. Responses $y$ are then instances of the normally distributed random variable $Y \sim p(y \mid x, f^*) := \mathcal{N}(f^*_w(x), \sigma^2)$, with $\sigma = 0.1$. Example datasets are shown in Figure 8a of the Appendix, where each color corresponds to a different sample of the network weights $w = \{w_1, w_2\}$. **W2.1.2: " ... in Section 3.1 ... how THR is calculated ... [is] not explained."** While the calculation is included in the attached code, we agree that it should be included in the paper. The quantiles of $p(y \mid x, f^*) := \mathcal{N}(f_w^*(x), \sigma^2)$ are computed analytically by $Q_{\frac{\epsilon}{2}}(f^*, x) := f_w^*(x) + \sigma \sqrt{2}\text{erf}^{-1}(2(\frac{\epsilon}{2}) - 1)$ and $Q_{1 - \frac{\epsilon}{2}}(f^*, x) := f_w^*(x) + \sigma \sqrt{2}\text{erf}^{-1}(2(1 - \frac{\epsilon}{2}) - 1)$. True hallucinations are then counted as $y$ samples that are either less than $Q_{\frac{\epsilon}{2}}(f^*, x)$ or greater than $Q_{1 - \frac{\epsilon}{2}}(f^*, x)$. The THR is then estimated as the empirical average over the response $y$ samples for a given query $x$. For a specific $f^*$, we illustrate examples of such hallucinations in panes 1 and 3 of Figure 2, along with the $\epsilon$ confidence intervals of $p(y \mid x, f^*)$ as the shaded blue region. **W2.2.1: In Section 3.2, the authors attempt to justify the use of PHR by introducing another proposed metric called MHR.** The MHR is not proposed to justify the PHR. Instead, it is a metric to evaluate whether the PHR is operating as expected when we do not have access to the true distribution and thus are unable to calculate the THR. It is to be considered along with the error rate. 
As we describe in lines 242-247, the error rate is grounded in human-labeled responses but does not capture the subtlety of the true conditional distribution of responses given a query and a set of in-context examples. The MHR assumes that the model predictive distribution is true and compares the distribution of responses to a query $x$ given the original in-context examples and a set of generated examples to the distribution of responses to a query $x$ given a set of additional true examples. In expectation, these two distributions should be equivalent if things are working correctly. **W2.2.2: It is unclear how MHR could serve as a gold metric.** We would not call the MHR a "gold" metric, because it is decoupled from the true distribution of responses to a query, and instead assumes that the model distribution under additional examples is true. Because we cannot calculate the "gold standard" THR, we report results for both the error rate and MHR instead. **W2.2.3: Even if MHR is a gold metric, why not just use MHR?** It does not make sense to use the MHR instead of the PHR because it assumes that you have more in-context examples than are available. This is fine for an evaluation metric, but not ok for a predictor of hallucinations. **W3: Typos: Line 31 "the" should be deleted.** Thank you, we have addressed this typo. We hope that this response addresses your concerns and look forward to discussing anything that may remain unclear. --- Rebuttal Comment 1.1: Comment: I appreciate the authors’ efforts in their rebuttal and their commitment to improving the presentation. However, since I cannot preview the modifications, I am unable to assess the extent of the enhancements. Additionally, I agree with the critique from other reviewers regarding the inadequate evaluation of the proposed method in non-synthetic scenarios. Although the proposed method's applicability to real-world cases remains unclear, I find the idea promising and interesting to me. 
Therefore, I would like to keep my scores unchanged. --- Reply to Comment 1.1.1: Comment: Thank you for taking the additional time to consider our rebuttal. As fellow reviewers, we empathize with your trepidation as we have also witnessed promised changes fail to manifest in camera-ready versions. The above clarifications are straightforward: minor changes in notational consistency/disambiguation and the inclusion of details as we have described. Ultimately, we can only ask for your trust that we will make these changes and respect your decision. Regarding your concern about evaluation, we ask you to consider the additional language model experiments we have run in response to reviewers VGJP and 6dN5. For reviewer VGJP, in a more realistic setting where labels can take on multiple semantically equivalent values, we show that the PHR maintains its performance in predicting the error rate and MHR in [Figure Error](https://anonymous.4open.science/r/figures-phr-rebuttal-E611/vgjp/w1_figure_error.png) and [Figure MHR](https://anonymous.4open.science/r/figures-phr-rebuttal-E611/vgjp/w1_figure_mhr.png). And for reviewer 6dN5, we show that our methods also work for the Gemma-2 9B model, where this [figure](https://anonymous.4open.science/r/figures-phr-rebuttal-E611/6dn5/figure1.png) indicates that the PHR can accurately predict the empirical error rate for Gemma-2 9B, and this [figure](https://anonymous.4open.science/r/figures-phr-rebuttal-E611/6dn5/figure2.png) shows that the PHR accurately predicts MHR for Gemma-2 9B.
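The synthetic data-generating process and THR computation spelled out in the W2.1.1/W2.1.2 answers above can be sketched as follows (our sketch; the dimension `d`, seed, and sample counts are illustrative assumptions, and the rebuttal's analytic quantiles via the inverse error function are computed here with the equivalent `NormalDist.inv_cdf`):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
d, sigma, eps = 16, 0.1, 0.1

# Random ReLU network f*(x) = w2^T ReLU(w1 * x), with w1, w2 ~ N(0, 1) (W2.1.1)
w1 = rng.standard_normal(d)
w2 = rng.standard_normal(d)
f_star = lambda x: float(w2 @ np.maximum(w1 * x, 0.0))

x = rng.uniform(-2.0, 2.0)                     # query X ~ U(-2, 2)
mu = f_star(x)
ys = mu + sigma * rng.standard_normal(10_000)  # responses Y ~ N(f*(x), sigma^2)

# Analytic eps-confidence interval of p(y | x, f*) (W2.1.2)
lo = NormalDist(mu, sigma).inv_cdf(eps / 2)
hi = NormalDist(mu, sigma).inv_cdf(1 - eps / 2)

# THR estimate: fraction of responses falling outside the interval (close to eps here)
thr = float(np.mean((ys < lo) | (ys > hi)))
print(thr)
```

By construction the true rate is exactly `eps`; the empirical estimate fluctuates around it with the usual binomial noise.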
Summary: The paper proposes a method to estimate the hallucination rate for in-context learning (ICL) in a conditional generative model (CGM) from a Bayesian perspective. The authors assume the CGM samples from a posterior predictive distribution over a Bayesian model of a latent parameter and data. They define Posterior Hallucination Rate (PHR), along with True Hallucination Rate (THR) and Model Hallucination Rate (MHR), and provide theoretical proofs and empirical demonstrations to show that they can accurately estimate the true probability of hallucination. Strengths: S1. The paper tackles an important and well-known issue of hallucination in language models. By focusing on estimating the hallucination rate, the research contributes to improving the reliability and trustworthiness of outputs from CGMs. This makes the work highly relevant in the context of ongoing AI issues. S2. The paper introduces new metrics, particularly PHR, for future evaluation of model hallucinations. This can potentially advance methodology for assessing model output reliability. S3. The metrics are well-defined and the pseudo-algorithms help in understanding how the metrics are used. Weaknesses: W1. While the authors tested their methods on six datasets from various domains, the range of label content and types may be limited. This limitation could introduce confounding factors. For instance, tasks where one label can be a substring of another (e.g. “similar” vs “dissimilar”) might not fully represent the complexity of real-world scenarios. W2. The labels used in the datasets, such as “entailment” vs “not”, do not have a clear semantic meaning and relation like “positive” vs “negative” OR “subjective” vs “objective.” This absence of semantic relation might influence the ICL performance. Testing whether the semantic meaning of the labels affect performance and hallucination rate would strengthen the findings. A suggested approach is to use neutral labels such as letters (A,B,etc.) 
to determine if the lack of semantic meaning of the labels impacts the hallucination detection. W3. Related to W1, the datasets used in the experiments have either 2 or at most 4 categories. This limited variety raises questions about how the method performs with more complex classification tasks. For example, how would the method fare in English-to-French word translation, where the number of possible categories is significantly higher? Expanding the experiments to include tasks with a larger number of categories would provide a more comprehensive assessment of the method’s robustness. Technical Quality: 2 Clarity: 3 Questions for Authors: Q1. What constitutes an “in-capability” task and “out-of-capability” task? Q2. In Figure 10, why do the lines, especially for context length 50 graphs, stop earlier? What does that imply about the performance or behavior of the model at longer context lengths? Q3. The paper references Fong et al. [22] proposing Martingale Posterior distribution that uses posterior predictive. Are there any baseline results for this approximation that could be compared to PHR to evaluate which distribution better defined models? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors have addressed the positive and negative societal impacts of their work. It would be beneficial to further discuss and address the dataset choices and potential limitations, such as label diversity and number of categories, to provide a comprehensive view of the conditions under which the method has been tested. This would enhance the understanding of the method’s applicability and generalizability ensuring a more thorough assessment of its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the effort made in reviewing our work. We appreciate that you have identified several strengths and recognize that your main criticisms concern evaluation and clarity of concepts. We offer clarifications, have run additional experiments, and have updated our manuscript accordingly. **W1: While the authors tested their methods on six datasets from various domains, the range of label content and types may be limited.** We recognize the importance of understanding the posterior hallucination rate for closer to real-world scenarios and offer additional experiments. Here, we represent task labels by a set of semantically similar responses. For example, the SST2 task has two labels, *1* and *0*. The valid responses for the *1* label are [positive, favorable, good]. The valid responses for the *0* label are [negative, unfavorable, bad]. The favorable/unfavorable dichotomy is analogous to your similar/dissimilar example, representing a more realistic setting where there may be a set of valid responses. In this setting, we show that the PHR maintains its performance in predicting the error rate and MHR in [Figure Error](https://anonymous.4open.science/r/figures-phr-rebuttal-E611/vgjp/w1_figure_error.png) and [Figure MHR](https://anonymous.4open.science/r/figures-phr-rebuttal-E611/vgjp/w1_figure_mhr.png). **W2: The labels ... such as “entailment” vs “not”, do not have a clear semantic meaning and relation like “positive” vs “negative” ...** We have run the entailment tasks (RTE and WNLI) under two additional settings to understand the effects of semantic meaning. First, we changed the negative label from *not* to *not entailment* to enforce semantic difference. This change did not have a noticeable effect on the results for these tasks. Next, we took your suggestion and changed the negative label from *not entailment* to *A* and the positive label from *entailment* to *B*. 
This change resulted in a significant difference in the in-context learning dynamics. As can be seen in the [Figure Error](https://anonymous.4open.science/r/figures-phr-rebuttal-E611/vgjp/w2_figure_error.png) and [Figure Entropy](https://anonymous.4open.science/r/figures-phr-rebuttal-E611/vgjp/w2_figure_entropy.png), the semantic information given by using entailment as a positive label helps the RTE task converge to a non-trivial error rate as the number of in-context examples increases. This change, in effect, makes the tasks even more out-of-capability. Thus, while interesting, we think it is out of scope for the primary evaluation of our methods. **W3/W4: ... experiments to include tasks with a larger number of categories would provide a more comprehensive assessment ... . ... discuss and address the dataset choices and potential limitations ...** We agree that this work will offer insights but argue that by limiting our perspective to in-context learning, we enable a rigorous definition of hallucinations and hallucination rates. Moreover, we provide a firm foundation to generalize to more complicated and less structured natural language settings, which we want to pursue in future work. Please see our response to reviewer Vp93 for a more detailed argument. **Q1: What constitutes an “in-capability” task and “out-of-capability” task?** We consider a task in-capability for an LLM if in-context learning leads to better-than-random accuracy as the number of in-context examples grows. In contrast, an out-of-capability task would only yield trivial predictions and thus hallucinate frequently. As a general example of this distinction for most LLMs, sentiment analysis may be a task that is in-capability, and theorem proving may be a task that is out-of-capability. We make this distinction because the model predictive distribution is not a good approximation of the ICL posterior predictive for out-of-capability tasks, and the requisite assumptions are not satisfied. 
**Q2: In Figure 10, why do the lines, especially for context length 50 graphs, stop earlier? What does that imply about the performance or behavior of the model at longer context lengths?** The lines end earlier because the PHR and THR are generally lower for longer context lengths. This trend is sensible—if the model were a perfect estimator of the reference distribution, then both quantities would converge to epsilon as the context length (number of in-context examples) increases. However, as we discuss in lines 211-218, the model distribution is subject to estimation divergences, and the difficulty of modeling the shrinking discrepancies between the ICL and model conditional distributions increases with the number of in-context examples. It is valuable to note that although the PHR is underestimated and increases the estimator error, the precision of the model is higher in this setting. **Q3: Are there any baseline results for [the Martingale Posterior] that could be compared to ...?** The martingale posterior is a generalization of standard Bayesian posteriors. It proposes a way of propagating uncertainty by specifying conditional distributions of the form $p(y_{n+1}| y_{1}, \dots, y_n)$ and then sampling data in a way similar to our algorithm: sample $y_{n+1} \sim p(y_{n+1}| y_{1}, \dots, y_{n})$, then condition on this sample, and sample $y_{n+2} \sim p(y_{n+2}| y_{1} \dots, y_{n+1})$, and so on. Notably, the conditional distributions $p(y_{n+1} | y_1, \dots, y_n)$ need not be posteriors in the Bayesian sense but simply valid conditionals. The samples for sufficiently large $n$ are understood as if they came from a posterior. Although this algorithm is related to what we do, they do not offer a measure corresponding to the PHR, so it is not directly applicable as a baseline. Moreover, they implement their method with Gaussian copulas, which does not apply to our setting. 
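As a toy illustration of the forward-sampling recursion described in the Q3 response (not the martingale-posterior authors' Gaussian-copula implementation), the sketch below uses a Beta-Bernoulli posterior predictive as the valid conditional; that predictive choice is our own assumption for illustration.

```python
# Minimal sketch of martingale-posterior forward sampling: sample
# y_{n+1} ~ p(y_{n+1} | y_1..y_n), condition on it, sample y_{n+2}, and so
# on. Here the conditional is the Beta(a, b)-Bernoulli posterior predictive
# p(y_{n+1}=1 | y_1..y_n) = (a + sum y) / (a + b + n).
import random

def predictive(ys, a=1.0, b=1.0):
    """Posterior-predictive probability of y=1 under a Beta-Bernoulli model."""
    return (a + sum(ys)) / (a + b + len(ys))

def martingale_posterior_sample(observed, horizon=2000, rng=random):
    """One posterior draw of the limiting frequency via forward sampling."""
    ys = list(observed)
    for _ in range(horizon):
        p = predictive(ys)
        ys.append(1 if rng.random() < p else 0)
    # The empirical mean of the imputed sequence behaves like a sample
    # from the posterior over the limiting frequency.
    return sum(ys) / len(ys)

random.seed(0)
observed = [1, 1, 0, 1]
draws = [martingale_posterior_sample(observed) for _ in range(200)]
# The draws concentrate around the Beta(1+3, 1+1) posterior mean 4/6.
```

In this conjugate toy case the forward-sampled draws recover the ordinary Bayesian posterior, which is the sense in which the martingale posterior generalizes it.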
We hope this response addresses your concerns and look forward to discussing any points you would like more elaboration on in the next phase. --- Rebuttal 2: Comment: Dear Reviewer, thank you again for the time and effort you've dedicated to reviewing our work. Your insights and feedback have significantly contributed to improving the quality of our paper. We believe our responses address the concerns raised in your reviews and are committed to making the necessary revisions to clarify any uncertainties. As the discussion period is nearing its conclusion, if you find that any aspects of our responses require further clarification or discussion, we are eager to engage in constructive dialogue. --- Rebuttal Comment 2.1: Comment: Thank you for your response. The authors have adequately addressed my initial concerns, and as a result, I have raised my score. --- Reply to Comment 2.1.1: Comment: Thank you for the additional time you have taken to consider our rebuttal. We are pleased to have addressed your initial concerns regarding the empirical evaluation, limitations, and clarity. We are encouraged by and sincerely appreciate your decision to raise your score.
Rebuttal 1: Rebuttal: We thank all reviewers for their insightful and constructive feedback. We are particularly encouraged by the recognition of several strengths across different aspects of our paper, which we summarize below. **Reviewer VGJP** acknowledges the significance of our work in addressing hallucination in language models, highlighting its relevance to current AI reliability concerns. They commend the introduction of the Posterior Hallucination Rate (PHR) for evaluating model output reliability, noting that this well-defined metric and accompanying pseudo-algorithms enhance comprehension and potential future applications in the field. **Reviewer yYgH** appreciates our identification of the significant issue of label ambiguity in using error rates to evaluate hallucination, noting its often-overlooked impact due to the lack of task specification. They also find the definition and algorithm for the Posterior Hallucination Rate (PHR) to be logically sound and well-reasoned. **Reviewer Vp93** recognizes the importance of addressing hallucination in large language models and commends our theoretical exploration of this issue for in-context learning. They find the interpretation of in-context examples as observations described in Section 2.1 to be original and intriguing. These remarks are encouraging and affirm the core strengths of our theoretical, methodological, and empirical contributions. In response to the concerns and suggestions raised, we have reviewed our manuscript to address each point individually. We have made specific revisions and clarifications to refine our theoretical and methodological explanations. Each reviewer's feedback is considered to ensure our responses are thorough and reflective of the effort made to improve our manuscript's quality. Once again, we thank all reviewers for their valuable feedback. We look forward to discussing our responses with each of you further.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Bridging The Gap between Low-rank and Orthogonal Adaptation via Householder Reflection Adaptation
Accept (spotlight)
Summary: Both low-rank and orthogonal adaptation techniques can effectively adapt large-scale pre-trained models to downstream tasks. This work proposes a new adaptation method based on Householder reflections (HR). It discloses the connection between low-rank and orthogonal adaptation and builds a unified adapter-based fine-tuning framework. The proposed method can further reduce the number of learnable parameters and achieve superior performance compared to existing methods. Strengths: The disclosed relationship between low-rank and orthogonal adaptation is meaningful in building a unified adapter-based fine-tuning framework. The HRA technique saves a lot of learnable parameters, and the experimental results are strong. Weaknesses: The structure of the paper is not well developed. The abstract and the introduction need some revision: the motivation for building a unified framework is not clear. What are the pros and cons of the low-rank and orthogonal adaptation techniques? What kind of advantages are they trying to combine from both techniques? The authors mention the gap between the two techniques; does this gap have a negative effect on the adaptation? Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Section 3, an overview of the motivation, novelty, and organization is needed at the beginning of the section. It is rambling to start with the details of the proposed method. 2. In the experiments, the parameter r is empirically set to some fixed value; please provide some advice on setting r for real applications. 3. What about the training cost and inference time of the proposed method compared to the others? 4. In Section 4.3, which part of the model is the proposed HRA applied to? Does each layer have a similar r? 5. Citation 33 was published in 2023. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No. Please discuss the risk of using such models in real applications, as well as the potential negative societal impact. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your appreciation of our work and constructive comments. Below, we try to resolve your concerns one by one. **Q1: Improve the structure of the paper and highlight the motivation for building a unified adaptation framework.** **A1:** We have detailed the pros and cons of LoRA and OFT in the Related Work section. Regarding their advantages, LoRA assumes that weight changes during model adaptation have a low "intrinsic rank," while OFT preserves pre-trained knowledge by maintaining the pairwise angles between neuron vectors. As for their drawbacks, LoRA cannot ensure the preservation of angles between neuron vectors, and OFT can only achieve low-rank weight updates when $(R−I)$ is low-rank. HRA combines these two strategies, leveraging their advantages jointly while suppressing their drawbacks simultaneously. Moreover, building a unified adaptation framework is insightful for revisiting the recent rapid development of various adaptation methods, which may help inspire new technical routes. We have placed an overview of the motivation, novelty, and organization at the end of the Introduction. We plan to polish our paper in the final version further. Thanks for your suggestion. **Q2: How to set r for real applications?** **A2:** **We have discussed the setting of $r$ in the above general response, which may have resolved your concern to some extent.** In this study, we set $r\leq8$ for most datasets and tasks because this setting is sufficient to achieve superior adaptation results with fewer trainable parameters. Therefore, we empirically recommend starting with $r=8$. If the model performance is not satisfactory, increasing $r$ will usually work better. As we mentioned in the above general response, like AdaLoRA, we will consider adjusting the $r$ of HRA adaptively as our future work, but it is not the main contribution of this paper. 
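To make the construction behind the discussion above concrete, here is a minimal numpy sketch (our own illustration, not the code used in the paper) of a chain of $r$ Householder reflections: the product is exactly orthogonal, while the resulting weight update has rank at most $r$, which is the bridge between orthogonal adaptation and LoRA. The regularizer at the end is one plausible form of an orthogonality penalty on the reflection planes, shown only as an assumption.

```python
# Sketch of a Householder reflection chain: H_i = I - 2 u_i u_i^T / ||u_i||^2,
# applied to a frozen weight W as W' = W @ H_1 ... H_r.
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4                      # weight dimension and number of reflections
W = rng.standard_normal((d, d))   # frozen pre-trained weight (toy stand-in)
U = rng.standard_normal((d, r))   # learnable reflection vectors u_1..u_r

def householder_chain(U):
    """Product of Householder reflections built from the columns of U."""
    d = U.shape[0]
    H = np.eye(d)
    for i in range(U.shape[1]):
        u = U[:, i:i + 1]
        H = H @ (np.eye(d) - 2.0 * (u @ u.T) / (u.T @ u))
    return H

H = householder_chain(U)
W_new = W @ H

orth_err = np.linalg.norm(H.T @ H - np.eye(d))   # ~0: H is orthogonal
update_rank = np.linalg.matrix_rank(W_new - W)   # <= r: low-rank update

# One plausible orthogonality regularizer on the reflection planes
# (our assumption): penalize off-diagonal Gram entries of the normalized u_i.
Un = U / np.linalg.norm(U, axis=0, keepdims=True)
reg = np.sum((Un.T @ Un - np.eye(r)) ** 2)
```

Since $H - I$ is a sum of $r$ rank-one terms, the update $W' - W = W(H - I)$ has rank at most $r$, mirroring the adaptive-LoRA interpretation.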
**Q3: The training and inference time of HRA.** **A3:** **We have shown the training time and computational efficiency of HRA in the general response.** After training, we multiply the learned orthogonal matrices with the weight matrices, leading to a new model without increasing parameters. Therefore, like LoRA and OFT, HRA does not change the model's inference time. **Q4: In Section 4.3, which part of the model is the proposed HRA applied to? Does each layer have a similar r?** **A4:** We follow the experimental settings of LoRA and OFT for a fair comparison. For the stable diffusion model, we apply HRA to its attention modules. In this study, we apply the same $r$ to the weight matrices of each attention module. Note that such a simple setting has resulted in superior performance. In the future, we can leverage the same idea as AdaLoRA, adjusting $r$ for different layers. **Q5: Reference about OFT. Citation 33 was published in 2023.** **A5:** Thank you for pointing out this mistake. We will correct it in the revised paper. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. It addresses all of my concerns from the initial review. --- Reply to Comment 1.1.1: Comment: Thanks for your response. It would be nice if our responses helped you evaluate our work further and led to a higher final score.
Summary: This paper proposes a simple yet efficient adaptation method, namely HRA, which fine-tunes a pretrained model by multiplying each frozen weight matrix with an orthogonal matrix constructed from a chain of learnable Householder reflections. The authors interpret HRA as an adaptive LoRA that retains OFT's theoretical guarantee of preserving pre-training knowledge, somewhat bridging the gap between LoRA and OFT. The number of trained parameters and the computational complexity are analyzed. Experiments on several pretrained models (DeBERTa, LLaMA2, Stable Diffusion) show that HRA, with fewer learnable parameters and a suitable orthogonality-regularization strength, achieves superior performance over existing methods, demonstrating the effectiveness of HRA for different downstream tasks. Strengths: This paper is well motivated and well written. The authors theoretically show that HRA can be formulated as an adaptive LoRA, providing a new perspective that bridges OFT to LoRA, which is insightful. To show the effectiveness and wide applicability of HRA, this paper has fairly conducted various types of experiments on different tasks, including traditional NLP tasks (GLUE), LLM tasks (GSM8K/MATH), and multimodal text-to-image generation. Weaknesses: This paper claims that HRA inherits the theoretical guarantee of OFT on the retention of pre-training knowledge. However, this seems to be a bit of an overclaim without experiments to back it up. Technical Quality: 3 Clarity: 4 Questions for Authors: What are the limitations of HRA? What is the wall-clock cost of HRA compared to LoRA during training? Could this paper show the time cost of HRA compared to LoRA? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: I would like to see the authors discuss the limitations of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your appreciation of our work. **We have resolved your concerns about the limitations and the computational efficiency of HRA in the above general response.** For your remaining concerns, we provide our answer below. **Q1: The evidence on the retention of prior knowledge.** **A1:** Thanks for your constructive suggestion. To verify our claim, we fine-tune LLaMA-2 7B on the MATHQA dataset by LoRA and HRA, respectively, and check the degradation of model performance on classic NLP tasks, including typical language tasks in ARC, HellaSwag, MMLU, and Winogrande, and a coding task in HumanEval. For a fair comparison, we apply the same number of trainable parameters and the same batch size for LoRA and HRA. Ideally, after adaptation, we hope that the model can still maintain its high performance in the NLP tasks. | Model | ARC | HellaSwag | MMLU | Winogrande | HumanEval | | :-----| :----: | :----: | :----: | :----: | :----: | | LLaMA2 7B | 49.74 | 58.90 | 45.92 | 74.11 | 12.80 | | LLaMA2 7B fine-tuned by LoRA | 48.81 | 56.89 | 40.60 | 71.27 | 11.59 | | LLaMA2 7B fine-tuned by HRA | 49.57 | 57.72 | 41.20 | 73.32 | 13.41 | According to the above results, we find that compared to LoRA, HRA retains more of the original model's knowledge, whose performance degradation is less severe than LoRA's. In the HumanEval task, its performance is even better than that of the original model (which we think is because the MATHQA dataset contains many samples relevant to logic and reasoning tasks and thus is useful in the HumanEval task). We will add this experimental result in the final paper. Thanks again for your suggestion. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thanks to the author's for their responses. I am glad to see the additional results and more confident to vote acceptance for this paper. --- Reply to Comment 1.1.1: Comment: Thanks for your appreciation of our work. 
We will include additional experimental results and corresponding analyses in the final version of the paper.
Summary: The paper proposes a simple but effective adaptation method based on Householder reflection matrices. The authors show that this method is closely related to low-rank adaptation. Diverse experiments demonstrate the effectiveness of the proposed method in comparison to a few baselines. Strengths: 1. The idea of using Householder reflection matrices for deep learning model adaptation is novel. 2. The proposed method is simple and effective. 3. The experiments are extensive. Weaknesses: 1. The proposed method is a special case of Orthogonal Fine-Tuning. The major difference is the use of the Householder reflection matrix. This means the novelty is relatively limited. 2. The paper does not explain why the proposed method outperforms baselines such as LoRA and OFT. 3. In the table of Figure 1(c), $\lambda=1e-4$ is better than $\lambda=\infty$ and $0$. The authors should report the performance of other values of $\lambda$, such as $1e-1$ to $1e-8$, to show the impact of $\lambda$. 4. According to Figure 1(c), it seems that the gain of the proposed method comes mainly from the regularization. This raises a question---will orthogonal regularization also improve LoRA and OFT? The authors should make a fair comparison and show the source of the improvement of the proposed method clearly. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The reviewer hasn't found the discussion of limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments. Below we try to resolve your concerns one by one. **Q1: The novelty of the proposed method.** **A1:** We respectfully disagree with the comment that our work's novelty is relatively limited for the following three reasons. Firstly, to our knowledge, our work makes the first attempt to bridge the gap between low-rank and orthogonal adaptation techniques. Our HRA is a new implementation of OFT. At the same time, it is also an adaptive LoRA, as we mentioned in Section 3.3. Secondly, existing OFT restricts the interactions between different dimensions of the weight matrix due to the block diagonal structure of its orthogonal matrix, while BOFT overcomes this issue by time-consuming butterfly matrix multiplication. Unlike these two methods, our HRA uses a chain of Householder reflections to implement orthogonal adaptation, as discussed in Section 3.2. This implementation leads to better efficiency and performance. Thirdly, focusing on HRA, we analyze the impact of the orthogonality of reflection planes on the adaptation performance (Section 3.4), proposing an orthogonal regularizer in the training phase, which can further boost the adaptation performance. We believe all three of the above contributions are new to the research community, and thus, the novelty of our work is sufficient. Actually, **in your first comment in the Strengths section, you acknowledged the novelty of our work.** Therefore, we hope that the above response helps you reconsider our work. **Q2: The reasons for the superiority of HRA compared with LoRA and OFT are not explained.** **A2:** In fact, we have explained the reasons for the superiority of HRA. In particular, in Section 3.2, we compared HRA with OFT and BOFT in terms of their implementations and computational complexity. 
**As shown in Lines 147-148 and 157-158, we have shown that HRA can be more efficient (i.e., using fewer trainable parameters) in a mild condition, which is easy to meet in our experiments.** Regarding the retention of pre-training knowledge, **we have shown in Lines 168-172 that HRA can preserve the angular information of weight matrices like OFT and BOFT do,** which is better than LoRA. In addition, by introducing an orthogonal regularizer during adaptation, we control the orthogonality of reflection planes and thus help achieve a trade-off between the model capacity and regularity. Compared to LoRA and OFT, this mechanism can further reduce the risk of over-fitting, which helps further boost our performance. We plan to add this point to the end of section 3.4. **Q3: More experimental results with other values of $\lambda$.** **A3:** We conduct the experiments with other values of $\lambda$, and the results are as follows. | Method | #Param | GSM8K | MATH | | :-----| :----: | :----: | :----: | | $\text{LoRA}_{r=32}$ | 0.25% | 50.2 | 7.8 | | $\text{OFT}_{b=16}$ | 0.13% | 50.1 | 8.4 | | $\text{HRA}_{r=32,\lambda=\infty}$ | 0.12% | 52.8 | 9.2 | | $\text{HRA}_{r=32,\lambda=1e-1}$ | 0.12% | 53.6 | 8.3 | | $\text{HRA}_{r=32,\lambda=1e-4}$ | 0.12% | 56.3 | 9.3 | | $\text{HRA}_{r=32,\lambda=1e-8}$ | 0.12% | 53.6 | 8.6 | | $\text{HRA}_{r=32,\lambda=0}$ | 0.12% | 55.8 | 9.0 | We can find that a) the performance of HRA is relatively stable concerning the change of $\lambda$, b) in the wide range of $\lambda$, HRA is superior to the baselines, c) even if ignoring the regularizer $(\lambda=0$), our method still outperforms the baselines, which demonstrates the effectiveness of implementing orthogonal adaptation based on Householder reflections. **Q4: Will the orthogonal regularizer also improve LoRA and OFT?** **A4:** This is an interesting question. 
Firstly, it should be emphasized that **our comparison experiments are fair because, in all Tables and Figures, we have considered the HRA with $\lambda=0$ (i.e., without the orthogonal regularizer) and compared it with the baselines.** The experimental results show that HRA can outperform LoRA, OFT, and other competitors even without the regularizer. **In other words, the superiority of our method is mainly from the proposed Householder reflection chain, and the proposed regularizer can further boost performance.** Secondly, it is meaningless to apply the orthogonal regularizer to OFT because the block diagonal structure of its orthogonal parameter matrix has ensured that the columns of the matrix are orthogonal to each other. For BOFT, the columns of different butterfly orthogonal parameter matrices are also orthogonal to each other. In other words, **OFT and its variants already have intrinsic and strict orthogonality constraints**. Finally, for the vanilla LoRA in the formulation $W+AB$, we can impose our orthogonal regularizer on its parameter matrices, which was never considered before. Although our regularizer is motivated by the orthogonality of reflection planes rather than LoRA, to resolve your concern, we imposed our regularizer on the matrix $B$ of LoRA and tested it in the mathematical reasoning task. | Method | #Param | GSM8K | MATH | | :-----| :----: | :----: | :----: | | $\text{LoRA}_{r=16}$ | 0.12% | 47.3 | 6.6 | | $\text{LoRA}_{r=16,\lambda=1e-4}$ | 0.12% | 47.7 | 6.8 | | $\text{HRA}_{r=32,\lambda=0}$ | 0.12% | **55.8** | **9.0** | | $\text{HRA}_{r=32,\lambda=1e-4}$ | 0.12% | **56.3** | **9.3** | The above results verify our claim --- the superiority of HRA is mainly caused by the Householder reflection chain. **Note that, without a strong theoretical motivation like the orthogonality of reflection planes, we have too many options for combining LoRA with our regularizer**, e.g., imposing our regularizer on $A$, $B$, $AB$, and so on. 
Considering so many variants of LoRA is out of the scope of this work. We hope the above responses can resolve your concerns. We are willing to discuss with you in the next phase if you have any other questions. --- Rebuttal Comment 1.1: Comment: I appreciate the rebuttal. I have raised the rating to 5. --- Rebuttal 2: Comment: We are grateful that our response has positively impacted the rating. If you have any further comments or questions, we would be happy to address them.
Summary: This paper proposes a new model fine-tuning method, called Householder reflection adaptation (HRA). The main idea of HRA is to fine-tune the model with a series of Householder reflections. By virtue of the Householder reflection, the orthogonality of the tuning matrix can be obtained, the number of tuning parameters can be reduced, and the computational cost can also be saved. Besides, with simple math, it can be shown that HRA also shares the low-rank property with LoRA. Experiments on several tasks have been conducted to demonstrate the advantages of the proposed method. Strengths: The idea is simple, yet very effective. Empirical results show that the proposed HRA can achieve better fine-tuning results with fewer trainable parameters. Weaknesses: I do not find major flaws in this work. Technical Quality: 3 Clarity: 3 Questions for Authors: I have no further questions. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors do not explicitly discuss the limitations and broader societal impacts of this work. Since the proposed method could be applied to LLMs, which have been used by more and more people, the broader societal impacts should be discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your appreciation of our work. We believe your concerns have been resolved in our general response, and we hope that our response can help increase your confidence score. We are willing to discuss with you in the next discussion phase if you have any other questions.
Rebuttal 1: Rebuttal: We thank all the reviewers for their appreciation of our work. Below, we provide a general response to their common concerns and a specific response to each reviewer's remaining questions. **Q1: Discussions on limitations and societal impacts of this work.** **A1:** Regarding the limitations of HRA, we believe the main concern is the setting of hyperparameters (i.e., the rank $r$ and the weight of orthogonal regularizer $\lambda$). Similar to LoRA, the rank $r$ of our HRA determines the trade-off between the number of trainable parameters and the training efficiency. In this study, we set $r$ to ensure that the number of our trainable parameters is smaller than those of baselines. Of course, inspired by the recent variants of LoRA, e.g., AdaLoRA, we can adjust the rank $r$ adaptively, which is not the main contribution of this work and thus is left to be our future work. For the weight $\lambda$, which determines the trade-off between the expressiveness and the regularity of the adapter, we set it in a range to quantitatively analyze the impact of orthogonality. In particular, following the suggestion of Reviewer 7LSp, we conducted mathematical reasoning experiments with different values of $\lambda$, and the results are as follows. | Method | #Param | GSM8K | MATH | | :-----| :----: | :----: | :----: | | $\text{LoRA}_{r=32}$ | 0.25% | 50.2 | 7.8 | | $\text{OFT}_{b=16}$ | 0.13% | 50.1 | 8.4 | | $\text{HRA}_{r=32,\lambda=\infty}$ | 0.12% | 52.8 | 9.2 | | $\text{HRA}_{r=32,\lambda=1e-1}$ | 0.12% | 53.6 | 8.3 | | $\text{HRA}_{r=32,\lambda=1e-4}$ | 0.12% | 56.3 | 9.3 | | $\text{HRA}_{r=32,\lambda=1e-8}$ | 0.12% | 53.6 | 8.6 | | $\text{HRA}_{r=32,\lambda=0}$ | 0.12% | 55.8 | 9.0 | We can find that a) the performance of HRA is relatively stable concerning the change of $\lambda$, b) in the wide range of $\lambda$, HRA is superior to the baselines, c) even if ignoring the regularizer $(\lambda=0$), our method still outperforms the baselines. 
These results demonstrate the effectiveness and robustness of implementing orthogonal adaptation based on Householder reflections. In the future, we will consider further analyzing the impacts of $\lambda$ in theory. Regarding the societal impact of our work, we believe HRA can further simplify the adaptation of LLMs and promote more LLM-based downstream applications. Similar to LoRA and OFT, HRA may suffer from some potential issues like inappropriate (even illegal) abuse, amplifying the social prejudice intrinsic to LLMs when the fine-tuning data are biased, and so on. **It should be noted that these potential issues are neither purely attributable to the technique itself nor specific to HRA; LoRA and OFT also suffer from them. Solving these issues depends on developing new techniques, social policies, and data quality improvement.** How to mitigate (even eliminate) these issues is left to our future work. We will add the above content to the final version of our paper and the attached NeurIPS paper checklist. **Q2: The comparisons for various methods on their training time and memory costs.** **A2:** Following the suggestions of Reviewers 1vEG and mtnZ, we adapt LLaMA2-7B on the MetaMathQA dataset by HRA and other baselines and test their training time and GPU memory costs. For a fair comparison, we conducted all the experiments on 8 NVIDIA RTX A6000 GPUs, and all the methods applied the same batch size and almost the same number of trainable parameters. | Method | #Param | Training time (hours) | Peak memory usage (GB) | GSM8K | MATH | | :-----| :----: | :----: | :----: | :----: | :----: | | LoRA | 0.12% | 45 | 279 | 47.3 | 6.6 | | OFT | 0.13% | 53 | 282 | 50.1 | 8.4 | | HRA | 0.12% | 30 | 287 | 56.3 | 9.3 | We can find that HRA's peak memory usage is comparable to that of the baselines, while its training time is less, and its accuracy in the downstream GSM8K and MATH tasks is better. 
These results demonstrate HRA's superiority in computational efficiency and adaptation performance. We hope that the above response can resolve your concerns. Thanks again for your positive feedback.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
PEAC: Unsupervised Pre-training for Cross-Embodiment Reinforcement Learning
Accept (poster)
Summary: This paper presents a new unsupervised reinforcement learning (URL) algorithm called Pre-trained Embodiment-Aware Control (PEAC). PEAC is designed to tackle cross-embodiment tasks, explicitly considering the influence of different embodiments to facilitate exploration and skill discovery across embodiments. Experimental results demonstrate that PEAC notably enhances adaptation performance. Strengths: 1. The paper is well-written and easy to follow. The intuitive figures and logically structured text make PEAC clear and understandable. 2. The author(s) conduct comprehensive experiments and present solid work. 3. As embodied intelligence gains increasing attention, cross-embodiment research is indeed an important direction. PEAC undoubtedly contributes to the community in this regard. Weaknesses: 1. Although the author(s) compare various standard and SOTA baselines in the experiments, they lack strong persuasiveness. Since the author(s) focus on cross-embodiment tasks, it is apparent that the baselines, which do not utilize embodiment information $e$, exhibit poorer performance. Therefore, it is unclear whether PEAC's high performance is due to the use of embodiment information $e$ or the utilization of cross-embodiment intrinsic rewards $\mathcal R_{\text{CE}}$. To highlight the paper's contribution, I suggest that the author(s) modify some baselines to naively incorporate embodiment information and then compare with them. For example, conditioning the policy and reward of DIAYN on embodiment $e$, etc. 2. Experiments conducted on Robosuite effectively showcase "cross-embodiment," while other environments may not do so as prominently. By modifying parameters like "mass" and "damping" in DMC tasks or introducing joint torque failures in Unitree A1, the setting becomes more like Meta-RL, where the robot's morphology remains relatively unchanged. 
In locomotion tasks, "cross-embodiment" could be exemplified by robots like Gym-MuJoCo's Walker2d, DMC's Walker, and Humanoid, where they all use two legs for alternating walking and aim to enhance mobility. However, they differ in morphology. Demonstrating whether PEAC can learn the same shared knowledge of bipedal walking from these diverse morphologies may better illustrate the topic of "cross-embodiment." Technical Quality: 3 Clarity: 3 Questions for Authors: Please see Weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: My primary concern is that the baselines in the experiments do not utilize embodiment information $e$. Please refer to the first point in Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your review, especially the praise for PEAC's contributions to embodied intelligence. Below we will address all your concerns. **W1:** About baselines utilizing embodiment information $e$. **A:** Thank you for recognizing that we have included various standard and SOTA baselines. Following your valuable suggestion of adding baselines that incorporate embodiment information, we have supplemented experiments with **LBS-Context** and **DIAYN-Context**, which extend LBS and DIAYN by incorporating embodiment information $e$, to better highlight the paper's contribution. Our results are below: | State-based DMC | Walker-mass | Quadruped-mass | Quadruped-damping | |-|-|-|-| | DIAYN | 266.7 | 501.8| 416.7 | | DIAYN-context | 364.4 | 564.2 | 499.5 | | LBS | 367.1 | 536.5 | 367.6 | | LBS-context | 365.9 | 514.5 | 417.4| | PEAC | **463.7** | **643.6** | **572.9** | | Image-based DMC | Walker-mass | Quadruped-mass | Quadruped-damping | |-|-|-|-| | DIAYN | 461.8 | 438.4 | 526.5 | | DIAYN-context | 572.0 | 457.6 | 546.6 | | PEAC-DIAYN | 587.3 | 578.4 | 572.8 | | LBS | 647.2 | 736.2 | 721.2 | | LBS-context | 655.1 | 739.1 | 722.5 | | PEAC-LBS | **727.6** | **755.2** | **740.2** | As shown in these two tables, LBS-Context and DIAYN-Context achieve superior performance compared with LBS and DIAYN, respectively, and PEAC still significantly outperforms these baselines. In particular, in state-based DMC, the performance improvement of LBS-Context and DIAYN-Context is more remarkable than in image-based DMC. It may be because, in image-based DMC, we take DreamerV2 as the RL backbone, which encodes historical information for decision-making and can distinguish different embodiments through this history to some degree. 
But in state-based DMC, we take DDPG as the RL backbone following previous works [1]; it is a Markovian policy, so utilizing embodiment information during pre-training is helpful (Lines 210-216). Consequently, our supplemented results **highlight that both the embodiment discriminator and the cross-embodiment intrinsic reward $\mathcal{R}_{\text{CE}}$ are effective for handling CEURL** (more details, including figures with aggregate metrics of these methods, are in the global response and the corresponding PDF).

**W2:** About experimental settings that can better highlight cross-embodiment.

**A:** Thank you for your recognition of our experiments on Robosuite. In DMC and Isaacgym, we mainly modify property parameters or introduce joint torque failures because embodiments with different properties are **one of the most basic cross-embodiment settings**, which is worth attention, especially as this is the first study of CEURL. Moreover, although these embodiments only modify property parameters, **their ability to handle tasks may differ a lot**; e.g., in our real-world experiments, when the front-foot joint of the A1 robot fails, it can only drag its legs forward, while when the rear-foot joint fails, it lifts its rear legs to move forward (Fig. 7 and video in the supplementary material). In the DMC experiments, besides settings with different properties like Walker-mass, we have also included a setting with different morphologies in **Appendix B.10**: **Walker-Cheetah**, where Walker and Cheetah both have two legs but their morphologies differ a lot. The experiments in Appendix B.10 demonstrate that PEAC also significantly outperforms baselines and reveal PEAC's potential in handling substantially different embodiments.
Moreover, following your constructive suggestions, we have also supplemented extensive experiments in DMC with greater differences across embodiments, **especially in morphology**:

- **Walker-Humanoid**: Following your constructive suggestion, we consider a cross-embodiment setting that includes Walker robots and Humanoid robots. Their robot properties, shapes, and action spaces are all different.
- **Walker-length** and **Cheetah-torsolength** [2]: The former includes Walker robots with different foot lengths, while the latter includes Cheetah robots with different torso lengths; thus the robots' properties and morphologies are different (figures of these embodiments are in the attached PDF of the global response).

The results in these settings are below:

| Walker-Humanoid | stand-stand | stand-walk | stand-run | walk-stand | walk-walk | walk-run | run-stand | run-walk | run-run | average |
|-|-|-|-|-|-|-|-|-|-|-|
| DIAYN | 286.8 | 320.2 | 307.2 | 185.4 | 175.4 | 209.7 | 75.4 | 61.1 | 55.4 | 186.3 |
| PEAC-DIAYN | 231.6 | 430.3 | 371.7 | 247.4 | 210.7 | 245.2 | 88.3 | 85.1 | 69.9 | 220.0 |
| LBS | 477.8 | 463.2 | 475.7 | **407.1** | 343.7 | 401.1 | 150.8 | 112.3 | 134.6 | 329.6 |
| PEAC-LBS | **486.4** | **484.9** | **483.6** | **406.9** | **373.6** | **432.5** | **180.4** | **145.7** | **188.2** | **353.6** |

| Walker-length | stand | walk | run | flip | average |
|-|-|-|-|-|-|
| LBS | **977.5** | 909.1 | **557.5** | 630.4 | 768.6 |
| PEAC-LBS | **970.0** | **976.4** | 544.2 | **764.9** | **813.9** |

| Cheetah-torsolength | run | run_backward | flip | flip_backward | average |
|-|-|-|-|-|-|
| LBS | 625.9 | **512.9** | 611.9 | 523.0 | 568.4 |
| PEAC-LBS | **745.3** | 499.7 | **646.4** | **649.2** | **635.2** |

As shown in these tables, PEAC achieves much greater performance compared to the baselines.
These experiments indicate that PEAC has powerful abilities to handle various kinds of embodiment differences, including different morphologies. Also, designing more effective methods for handling complicated embodiments like Humanoid is a promising future direction. Reference: [1] URLB: Unsupervised Reinforcement Learning Benchmark [2] Learning Robust State Abstractions for Hidden-Parameter Block MDPs --- Rebuttal Comment 1.1: Comment: Thank you very much for the thorough response. The additional experiments have completely addressed my concerns, and I will raise the rating by 2 points. --- Rebuttal 2: Title: Thanks a lot for your feedback Comment: Dear reviewer RG3H, Thank you very much for your valuable suggestions and increasing the score. We will try our best to further improve our paper. best regards, Authors
Summary: This paper introduces a novel setting, cross-embodiment unsupervised RL, which deals with pre-training good policies in reward-free environments across different embodiments in order to perform well on downstream tasks on unseen embodiments. The authors propose the algorithm PEAC for unsupervised learning in this setting. Their method proposes a novel intrinsic reward derived from the minmax objective of maximizing the improvement of the downstream fine-tuned policy over the pre-trained policy, minus a regularizing KL term that ensures the fine-tuned policy stays close to the pre-trained policy, under the worst-case extrinsic reward. From this objective, they derive an intrinsic reward for the unsupervised cross-embodiment setting which maximizes the (fixed) log prior over embodiments minus the (learned) log posterior over embodiments, encouraging the agent to seek areas of the state space that are common across embodiments, or underexplored. They conduct extensive experiments in both simulated and real environments, including both state-based and image-based DMC tasks, and compare against a large number of pre-existing skill-based and exploration-based unsupervised RL methods. Their results are impressive, showing improvement in almost all cases over the baselines. Moreover, they conduct real-world experiments in which their algorithm similarly outperforms the baselines.

Strengths:
- Introduces a novel setting: cross-embodiment unsupervised RL
- Convincing motivation: the idea that most cross-embodiment training, which focuses on a single task, may learn task-specific rather than generalizable embodiment-specific skills
- Extensive experiments in both simulation and real-world settings with very strong results compared to baselines

Weaknesses:
- The paper is hard to follow and key details on the implementation are not fully explained.
For example, in line 68 an "embodiment context encoder" is mentioned, but I do not see this explained anywhere else in the paper. How is the discriminator trained?
- The extension of the information geometry analyses in Sec. 3 does not seem particularly related to the main claims of the paper. This section serves primarily to highlight that policies may be stochastic; however, it does not seem that this information is utilized in the algorithm presented.
- The stated goal of the paper is to better learn embodiment-aware skills, but the policy is optimized to maximize the expectation over all embodiments. Would this encourage embodiment-aware skills or rather embodiment-agnostic skills?

Technical Quality: 3 Clarity: 2

Questions for Authors: 1. It is not clear to me how the cross-embodiment policy handles different state spaces. For example, in the cheetah-walker experiments in the appendix, how is the policy architecture designed such that it can take in different-size state vectors? 2. What does the superscript c in the MDP M represent? 3. Can you please explain the equality in the last line of Eq. 29? 4. It is not clear what is unique about the cross-embodiment setting for unsupervised RL versus the more generic contextual MDP setting. Why do embodiments need to be treated differently? 5. If log(p(e)) is fixed, does it need to be in the reward?

Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3

Limitations: - As the authors note, their experiments are limited to settings where the embodiments are very similar. It is not clear how their method extends to dissimilar embodiments, particularly with different-sized state vectors.

Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for the supportive review and constructive suggestions. Below we address the detailed comments.

**W1:** About the embodiment context encoder in line 68.

**A:** The embodiment context encoder is the embodiment discriminator in the main text of the paper, and we will polish the paper to make the terminology consistent. As for the training of the discriminator, we use the cross-entropy loss on the one-hot embodiment vector during the pre-training stage, following previous works [1]. We will clarify this in the revised version. To better clarify the impact of the embodiment discriminator and the cross-embodiment intrinsic reward $R_{\text{CE}}$, we have supplemented ablation studies of **LBS-Context** and **DIAYN-Context**, i.e., combining LBS and DIAYN with our embodiment discriminator in PEAC but without $R_{\text{CE}}$. Our results are below:

| State-based DMC | Walker-mass | Quadruped-mass | Quadruped-damping |
|-|-|-|-|
| DIAYN | 266.7 | 501.8 | 416.7 |
| DIAYN-context | 364.4 | 564.2 | 499.5 |
| LBS | 367.1 | 536.5 | 367.6 |
| LBS-context | 365.9 | 514.5 | 417.4 |
| PEAC | **463.7** | **643.6** | **572.9** |

| Image-based DMC | Walker-mass | Quadruped-mass | Quadruped-damping |
|-|-|-|-|
| DIAYN | 461.8 | 438.4 | 526.5 |
| DIAYN-context | 572.0 | 457.6 | 546.6 |
| PEAC-DIAYN | 587.3 | 578.4 | 572.8 |
| LBS | 647.2 | 736.2 | 721.2 |
| LBS-context | 655.1 | 739.1 | 722.5 |
| PEAC-LBS | **727.6** | **755.2** | **740.2** |

As shown in these two tables, LBS-Context and DIAYN-Context achieve superior performance compared with LBS and DIAYN respectively, and PEAC still significantly outperforms these baselines. Consequently, this ablation study **highlights that both the embodiment discriminator and the cross-embodiment intrinsic reward $R_{\text{CE}}$ are effective for handling CEURL** (more details and results are in the global response and the attached PDF).

**W2:** About the information geometry analyses in Sec. 3.
**A:** Yes, our extension of the information geometry analyses does not directly guide the design of PEAC. As the major contributions of this paper include the novel setting **CEURL** and our algorithm **PEAC**, **our information geometry analyses are mainly related to CEURL** by revealing its difficulty. Moreover, as you have mentioned, our information geometry analyses highlight that policies may be stochastic or history-dependent; thus PEAC chooses to encode historical state-action pairs into the embodiment context in practice (Lines 213-216). We will emphasize this in the revised version.

**W3:** Embodiment-aware or embodiment-agnostic skills?

**A:** The goal of **embodiment-aware skill discovery** (Lines 232-252) is to learn similar skills for all these embodiments during the pre-training stage. However, as different embodiments have different properties/morphologies, their actions for achieving similar skills may differ a lot. Thus we name our method embodiment-aware skill discovery. Thanks again for your question; we will discuss the concept more clearly in the revised version.

**Q1:** How to handle different action/state spaces?

**A:** When handling state/action spaces with different dimensions, we take the maximal dimension $d$ of these spaces and zero-pad the states/actions to $d$ dimensions, following previous works [2].

**Q2:** What does $c$ in the MDP $\mathcal{M}^c$ represent?

**A:** We use $\mathcal{M}$ to represent the original MDP, $\mathcal{M}_e$ the MDP with embodiment $e$, **$\mathcal{M}^c$ the controlled MDP**, i.e., an MDP without rewards in the pre-training stage, and $\mathcal{M}_e^c$ the controlled MDP with embodiment $e$.

**Q3:** About the equality in the last line of Eq. 29.

**A:** The last equality of Eq.
29 can be derived from the definition of conditional probability, where we abbreviate $\mathcal{M}_e^c$ as $e$:
$$\log \frac{p_{\pi}(\tau)}{p_{\pi}(\tau|e)} = \log \frac{p_{\pi}(\tau)}{p_{\pi}(\tau, e) / p_{\pi}(e)} = \log \frac{p_{\pi}(e)}{p_{\pi}(\tau, e) / p_{\pi}(\tau)} = \log \frac{p(e)}{p_{\pi}(e|\tau)}.$$
Thanks for your question; we will provide the detailed derivation in the revised version.

**Q4:** About the connection between CEURL and contextual MDPs.

**A:** As mentioned in Lines 125-126, cross-embodiment RL can be formulated as a contextual MDP. Moreover, as the cross-embodiment setting is practical for real-world applications and has several unique properties, it introduces various new problems like cross-embodiment transfer [3] and has received widespread attention in embodied intelligence [3-5] (as mentioned by Reviewer RG3H). In this work, considering that different embodiments may share similar structures and learn similar skills, we propose unsupervised pre-training across embodiments to learn knowledge specialized only in these embodiments themselves, i.e., CEURL. This is a novel setting that existing contextual MDPs do not cover; thus we combine the contextual MDP with the controlled MDP into our CEMDP to formulate the problem.

**Q5:** Is $\log p(e)$ used in the reward?

**A:** No; as mentioned in Lines 205-206, our cross-embodiment intrinsic reward eliminates $\log p(e)$ as it is fixed.

**Limitation:** About experimental settings that can better highlight cross-embodiment.

**A:** We have supplemented more cross-embodiment settings with different morphologies in DMC: Walker-Humanoid, Walker-length, and Cheetah-torsolength. **More details and results are in the global response, where PEAC shows greater performance compared to baselines**. The results indicate that PEAC can handle various kinds of embodiment differences, including different morphologies.
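The identity in the answer to Q3 can also be checked numerically on a toy joint distribution (a sketch; the probability table below is purely illustrative and stands in for $p_{\pi}(\tau, e)$):

```python
import math

# Illustrative joint distribution p(tau, e) over 2 trajectories x 2 embodiments.
p_joint = {("t0", "e0"): 0.10, ("t0", "e1"): 0.30,
           ("t1", "e0"): 0.25, ("t1", "e1"): 0.35}

# Marginals p(tau) and p(e).
p_tau = {t: sum(v for (ti, _), v in p_joint.items() if ti == t) for t in ("t0", "t1")}
p_e = {e: sum(v for (_, ei), v in p_joint.items() if ei == e) for e in ("e0", "e1")}

for (t, e), p_te in p_joint.items():
    p_tau_given_e = p_te / p_e[e]    # p(tau | e) = p(tau, e) / p(e)
    p_e_given_tau = p_te / p_tau[t]  # p(e | tau) = p(tau, e) / p(tau)
    lhs = math.log(p_tau[t] / p_tau_given_e)  # log p(tau)/p(tau|e)
    rhs = math.log(p_e[e] / p_e_given_tau)    # log p(e)/p(e|tau)
    assert abs(lhs - rhs) < 1e-12  # both equal log[p(tau)p(e)/p(tau,e)]
```

Both sides reduce to $\log \frac{p(\tau)p(e)}{p(\tau, e)}$, which is exactly the chain of equalities above.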
Reference: [1] Multi-task reinforcement learning with soft modularization [2] Learning to Modulate pre-trained Models in RL [3] Cross-Embodiment Robot Manipulation Skill Transfer using Latent Space Alignment [4] Xirl: Cross-embodiment inverse reinforcement learning [5] Pushing the limits of cross-embodiment learning for manipulation and navigation --- Rebuttal Comment 1.1: Title: Look forward to further feedback Comment: Dear Reviewer xr64: We sincerely thank you again for your constructive feedback and suggestions. We have tried our best to answer the concerns raised, especially including more ablation studies to explain the embodiment context encoder, and are happy to clarify/discuss any further questions. We hope you may find our response satisfactory and raise your rating accordingly. We are looking forward to hearing from you about any further feedback. Best, Authors. --- Rebuttal 2: Title: Reviewer Discussion Needed Comment: Dear Reviewer, The discussion time is coming to an end soon. Please engage in the discussion process which is important to ensure a smooth and fruitful review process. Give notes on what parts of the reviewers responses that have and have not addressed your concerns. --- Rebuttal 3: Comment: Dear authors, thank you for the thorough response and for addressing my questions. The additional experiments on more complicated cross-embodiment settings have alleviated some of my concerns. However, I believe the setting is still very similar to multi-task or meta-RL and it is not clear to me why the problem must be treated as distinct except in the treatment of different state and action spaces and, in this respect, the method of zero-padding is slightly underwhelming. That said, the new experiments certainly make the paper's claims more convincing. I will raise my rating by 1 point. 
--- Rebuttal 4: Title: Thanks a lot for your feedback and supportive comments Comment: Dear Reviewer xr64: Thank you very much for your constructive feedback and for increasing the score. Below we answer your questions, especially about the difference between CEURL and multi-task/meta RL. First, we briefly introduce multi-task/meta RL and CEURL:

- **Multi-task RL**: train an agent to handle several different tasks, represented by MDPs **with rewards**, at the same time.
- **Meta RL**: pre-train an agent on several training tasks, represented by MDPs **with rewards**. The pre-trained agent is required to adapt quickly to testing tasks. Here training tasks and testing tasks are sampled from the same distribution [1,2].
- **CEURL**: pre-train an agent with several embodiments **without rewards**. The pre-trained agent is required to adapt quickly to any testing tasks. Here we have no prior knowledge of the testing task (Lines 137-146).

Consequently, from a modeling perspective, the biggest difference is that **CEURL pre-trains in an unsupervised manner**, i.e., without any extrinsic rewards, whereas **meta-RL pre-trains with extrinsic rewards**, which follow the same distribution as the testing tasks. Moreover, from a conceptual perspective, multi-task/meta RL regards the embodiment and the task as a whole and utilizes a single MDP to handle them together. Differently, CEURL distinguishes between the concepts of embodiments and tasks. Thus CEURL aims to learn embodiment-aware and task-agnostic knowledge by pre-training agents only with these embodiments, without any tasks (Lines 125-136). This knowledge is considered very helpful, especially when embodied intelligence requires handling different tasks across embodiments in the real world. Furthermore, we agree that zero-padding is a feasible but not the best method for handling different state/action spaces.
We believe that developing unified action/state embedding spaces is a promising future direction for handling the cross-embodiment setting. Thanks again for your kind comments; we will add these discussions to the paper to further improve it. Also, we are looking forward to hearing any further feedback from you. best regards, Authors Reference: [1] Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks [2] Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables
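The zero-padding scheme discussed in this thread can be sketched as follows (a minimal illustration; the embodiment names and dimensions are hypothetical, not the exact ones used in the paper):

```python
import numpy as np

def pad_to(x, d):
    """Zero-pad a 1-D state/action vector x up to dimension d."""
    x = np.asarray(x, dtype=np.float32)
    out = np.zeros(d, dtype=np.float32)
    out[: x.shape[0]] = x
    return out

# Embodiments with different state dimensions share one policy input size:
states = {"walker": np.ones(24), "cheetah": np.ones(17)}
d = max(s.shape[0] for s in states.values())  # maximal dimension across spaces
padded = {name: pad_to(s, d) for name, s in states.items()}
assert all(p.shape == (d,) for p in padded.values())
```

All padded vectors then share the shape `(d,)`, so a single policy network can consume observations from every embodiment.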
Summary: This paper addresses the challenge of designing generalizable agents capable of adapting to diverse embodiments. The authors propose the CEURL setting as a novel framework for this problem and introduce the PEAC algorithm to address it. Recognizing that CEURL requires minimizing over different downstream tasks while maximizing the fine-tuned policy's performance, PEAC tackles the issue through policy improvement and policy constraint mechanisms. PEAC can integrate with existing unsupervised reinforcement learning methods designed for single-embodiment settings, such as LBS and DIAYN. Through experiments conducted in both simulation and real-world settings, the effectiveness of PEAC is demonstrated.

Strengths: Originality: This paper introduces a novel problem formulation, CEURL, and presents a new algorithm, PEAC. Quality: 1. The authors thoroughly discuss the challenges inherent in the CEURL problem definition, supported by mathematical proofs. This elevates CEURL from a simple definition to a high-quality problem formulation. 2. Convincing experiments involving real robots are presented. Clarity: The paper is well-written and easy to follow. Significance: This paper has the potential to significantly impact cross-embodiment control through its introduction of a new problem formulation and an innovative unsupervised pretraining framework.

Weaknesses: 1. Among all the experiments presented, the tasks for DMC and Isaacgym are relatively limited, with only simple movements. I recommend adding more complex tasks like [1]. 2. In the ablation study, the authors only show the results of different training timesteps. More ablation studies could help demonstrate how the algorithm works. 3. From my perspective, this paper is closely related to [2], which proposes an unsupervised skill discovery method through contrastive learning.
While [2] focuses on single-embodiment scenarios, PEAC extends this approach to multiple embodiments, addressing the additional challenges posed by varying state and action spaces via additional embodiment prior and posterior probabilities.

[1] Gupta, Agrim, et al. "Embodied intelligence via learning and evolution." Nature Communications 12.1 (2021): 5721.
[2] Yang, Rushuai, et al. "Behavior contrastive learning for unsupervised skill discovery." International Conference on Machine Learning. PMLR, 2023.

Technical Quality: 3 Clarity: 3

Questions for Authors: 1. For objective function (2), can you provide an ablation study using only the Policy Improvement part and only the Policy Constraint part? 2. In Figure 6 (right), PEAC-DIAYN shows the worst performance at 100k training steps. What is the cause? 3. In Appendix B.6, the performance of PEAC in Robosuite is fair compared to that in DMC or Isaacgym. Does PEAC work better for simple tasks (like walk, stand) than the tasks in Robosuite?

Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your supportive review and valuable suggestions. Below we address the detailed comments for all your questions.

**W1:** About more complex tasks.

**A:** In our experiments, the tasks chosen in DMC and Isaacgym are **standard** and **widely used in unsupervised RL evaluation** [1, 2, 3]. These tasks are basic and important for evaluating agents' locomotion ability, and they can also be combined to handle much more complex tasks like parkour [2]. Thanks for your suggestion and reference; referring to [4], which includes locomotion tasks in complicated terrain such as inclines, we have designed a more complicated setting for Walker-mass in DMC to evaluate locomotion ability in such terrain: **Walker-mass-incline**. The results are shown below:

| Walker-mass-incline | stand | walk | run | flip | average |
|-|-|-|-|-|-|
| LBS | 725.2 | 597.7 | 109.3 | 430.6 | 465.7 |
| PEAC-LBS | **793.8** | **670.3** | **268.9** | **509.1** | **560.5** |

As shown in this table, the performance of both LBS and PEAC-LBS decreases when locomoting on incline terrain due to its complexity. **PEAC-LBS still significantly outperforms LBS**, showing that our method, especially the cross-embodiment intrinsic reward, benefits cross-embodiment unsupervised pre-training for handling more complicated tasks.

**W2 \& Q1:** More ablation studies, like only using one of the two parts in Eq. 2.

**A:** Thanks for your suggestion. There are two terms in Eq. 2, the policy improvement part and the policy constraint part, representing that in the fine-tuning stage we need to **maximize the policy performance** within **limited training steps** (Lines 182-188 in the paper). Thus they need to be considered together, and it is unreasonable to use only one of these two parts in this setting. Moreover, we introduce a trade-off parameter $\beta$ in Eq. 2 to **balance these two terms**, **which is set to 1.0 in all our experiments**.
Consequently, we have supplemented an ablation of $\beta$, the results of which are:

| Ablation of $\beta$ | Walker-mass | Quadruped-mass |
|-|-|-|
| PEAC-LBS ($\beta$=1.0, default) | **727.6** | **755.2** |
| PEAC-LBS ($\beta$=0.5) | **726.9** | 728.8 |
| PEAC-LBS ($\beta$=2.0) | 713.2 | **750.1** |

As shown in this table, when $\beta$ deviates from the default, the performance of PEAC-LBS decreases a little, and overall PEAC-LBS is stable across different $\beta$. Besides $\beta$, we have also supplemented more ablation studies, especially about our cross-embodiment intrinsic reward $R_{\text{CE}}$: we consider **LBS-Context** and **DIAYN-Context**, i.e., combining LBS and DIAYN with our embodiment discriminator in PEAC but without $R_{\text{CE}}$. Our results are below:

| State-based DMC | Walker-mass | Quadruped-mass | Quadruped-damping |
|-|-|-|-|
| DIAYN | 266.7 | 501.8 | 416.7 |
| DIAYN-context | 364.4 | 564.2 | 499.5 |
| LBS | 367.1 | 536.5 | 367.6 |
| LBS-context | 365.9 | 514.5 | 417.4 |
| PEAC | **463.7** | **643.6** | **572.9** |

| Image-based DMC | Walker-mass | Quadruped-mass | Quadruped-damping |
|-|-|-|-|
| DIAYN | 461.8 | 438.4 | 526.5 |
| DIAYN-context | 572.0 | 457.6 | 546.6 |
| PEAC-DIAYN | 587.3 | 578.4 | 572.8 |
| LBS | 647.2 | 736.2 | 721.2 |
| LBS-context | 655.1 | 739.1 | 722.5 |
| PEAC-LBS | **727.6** | **755.2** | **740.2** |

As shown in these two tables, LBS-Context and DIAYN-Context achieve superior performance compared with LBS and DIAYN respectively, and PEAC still significantly outperforms these baselines. Consequently, this ablation study **highlights that both the embodiment discriminator and the cross-embodiment intrinsic reward $R_{\text{CE}}$ are necessary for handling CEURL** (more details of the results are in the global response).

**W3:** About the relation with BeCL [2].

**A:** BeCL [2] mainly focuses on encouraging the agent to learn diverse skills related to corresponding behaviors by applying contrastive learning for single-embodiment skill discovery.
As you have mentioned, our PEAC considers the cross-embodiment setting and is thus **orthogonal to these skill-discovery methods**. We have discussed and compared BeCL as well as other skill discovery methods in the paper. In particular, as shown in Sec. 4, we have discussed the relationship between the objective of skill discovery (including BeCL) and our cross-embodiment objective, resulting in **a unified cross-embodiment skill-based adaptation objective** (Eq. 6-7). Thus our PEAC can naturally combine with these skill-discovery methods, including BeCL, and we propose PEAC-DIAYN as an example. Thanks again for your question; applying contrastive learning methods like BeCL to our PEAC for handling CEURL is indeed an interesting future direction.

**Q2:** About the performance of PEAC-DIAYN at 100k pre-training steps.

**A:** Thanks for your question. When the number of pre-training steps is small (like 100k), it is difficult to learn useful knowledge during the pre-training stage (like world models or our embodiment discriminator), and the uncertainty of the results is relatively high; thus current research mainly considers the fine-tuning results after 2M pre-training steps in DMC [1-2].

**Q3:** About the performance of PEAC in Robosuite.

**A:** In Robosuite, the overall performance of PEAC is still 10\% higher than all baselines, indicating the effectiveness of PEAC. Compared with DMC, the morphologies in Robosuite vary a lot more, so baselines may also distinguish different embodiments directly via their morphology observations. Consequently, PEAC's lead may slightly decrease, but it still significantly outperforms the baselines (we have also supplemented experiments in DMC with varying morphologies in the global rebuttal). Thanks for your question; we will discuss this more in the revised version.
Reference: [1] URLB: Unsupervised Reinforcement Learning Benchmark [2] Behavior contrastive learning for unsupervised skill discovery [3] Robot Parkour Learning [4] Embodied intelligence via learning and evolution --- Rebuttal Comment 1.1: Title: Look forward to further feedback Comment: Dear Reviewer 3BHi: We sincerely thank you again for your valuable and constructive comments. We have tried our best to answer the concerns raised, especially including more ablation studies and more complex tasks, and are happy to clarify/discuss any further questions. We hope you may find our response satisfactory and raise your rating accordingly. We are looking forward to hearing from you about any further feedback. Best, Authors. --- Rebuttal 2: Title: Reviewer Discussion Needed Comment: Dear Reviewer, The discussion time is coming to an end soon. Please engage in the discussion process which is important to ensure a smooth and fruitful review process. Give notes on what parts of the reviewers responses that have and have not addressed your concerns. --- Rebuttal 3: Title: Thank you again for your supportive comments at the end of the discussion Comment: Dear Reviewer 3BHi: Thanks a lot for your supportive and constructive comments. As the rebuttal phase is nearing its end, we sincerely hope that you could take a moment to review our rebuttal. Your feedback is invaluable to us, and we truly appreciate your time and efforts in this review process. Thank you again for your time and effort in helping us improve our paper. Best, Authors.
Rebuttal 1: Rebuttal: We thank all the reviewers for their valuable comments, which help to further improve our paper. Here we first address the common concerns on **baselines/ablation studies** and **more complicated cross-embodiment settings**. Then, we provide a detailed response to the comments of each reviewer respectively.

**Q1:** More baselines and ablation studies.

**A:** We appreciate the reviewers' recognition of the comprehensiveness of the experiments (Reviewer RG3H) and the adequacy of the baselines (Reviewer xr64). **To better highlight the contribution of this paper**, we have supplemented various ablation studies following the valuable suggestions of Reviewers 3BHi and RG3H. First, we supplement **LBS-Context** and **DIAYN-Context**, i.e., combining LBS and DIAYN with the embodiment discriminator in PEAC, which utilizes embodiment information during the pre-training stage. Our results are below:

| State-based DMC | Walker-mass | Quadruped-mass | Quadruped-damping |
|-|-|-|-|
| DIAYN | 266.7 | 501.8 | 416.7 |
| DIAYN-context | 364.4 | 564.2 | 499.5 |
| LBS | 367.1 | 536.5 | 367.6 |
| LBS-context | 365.9 | 514.5 | 417.4 |
| PEAC | **463.7** | **643.6** | **572.9** |

| Image-based DMC | Walker-mass | Quadruped-mass | Quadruped-damping |
|-|-|-|-|
| DIAYN | 461.8 | 438.4 | 526.5 |
| DIAYN-context | 572.0 | 457.6 | 546.6 |
| PEAC-DIAYN | 587.3 | 578.4 | 572.8 |
| LBS | 647.2 | 736.2 | 721.2 |
| LBS-context | 655.1 | 739.1 | 722.5 |
| PEAC-LBS | **727.6** | **755.2** | **740.2** |

**The figures with aggregate metrics of these methods are in the attached PDF**. As shown in these two tables and figures, LBS-Context and DIAYN-Context achieve superior performance compared with LBS and DIAYN respectively, and PEAC still significantly outperforms them. In particular, the improvement of LBS-Context and DIAYN-Context is more pronounced in state-based DMC than in image-based DMC.
This may be because, in image-based DMC, we take DreamerV2 as the RL backbone, which encodes historical information for decision-making and can thus distinguish different embodiments through that history to some degree (Lines 210-216). Consequently, **this ablation study highlights that both the embodiment discriminator and the cross-embodiment intrinsic reward $\mathcal{R}_{\text{CE}}$ are effective for handling CEURL**. Moreover, we supplement ablation studies of the hyperparameter $\beta$ in Eq. 2 ($\beta$ is set to 1.0 in all our experiments), which balances the policy improvement term and the policy constraint term.

| Ablation of $\beta$ | Walker-mass | Quadruped-mass |
|-|-|-|
| PEAC-LBS | **727.6** | **755.2** |
| PEAC-LBS ($\beta$=0.5) | **726.9** | 728.8 |
| PEAC-LBS ($\beta$=2.0) | 713.2 | **750.1** |

As shown in this table, when $\beta$ deviates from the default, the performance of PEAC-LBS decreases a little, but PEAC-LBS is still stable across different $\beta$.

**Q2:** About experiments with more complicated cross-embodiment settings.

**A:** To the best of our knowledge, this is **the first work to consider unsupervised pre-training across different embodiments**. Consequently, when designing the experimental settings, we hope to cover diverse kinds of cross-embodiment settings to fully validate algorithms in this setting, including different embodiment properties, different morphologies, different actions, and so on. In DMC, we mainly consider embodiments with different properties like mass and damping, as this is **one of the most basic cross-embodiment settings** and worth attention, especially as we are the first to research CEURL. Besides these, there is also a setting with **different morphologies** in **Appendix B.10**: **Walker-Cheetah**, whose results show that PEAC can also handle embodiments with different morphologies.
Following reviewers' valuable suggestions, we have also supplemented more cross-embodiment settings with **different morphologies** in DMC:

- **Walker-Humanoid**: As suggested by Reviewer RG3H, we consider the cross-embodiment setting that includes Walker robots and Humanoid robots. Their robot properties, robot shapes, and action spaces are all different.
- **Walker-length** and **Cheetah-torsolength** [1]: The former includes walker robots with different foot lengths, while the latter includes cheetah robots with different torso lengths. Thus the robots' properties and morphologies both differ.

The figures of these embodiments are in the attached PDF. The results in these settings are below:

| Walker-Humanoid | stand-stand | stand-walk | stand-run | walk-stand | walk-walk | walk-run | run-stand | run-walk | run-run | average |
|-|-|-|-|-|-|-|-|-|-|-|
| DIAYN | 286.8 | 320.2 | 307.2 | 185.4 | 175.4 | 209.7 | 75.4 | 61.1 | 55.4 | 186.3 |
| PEAC-DIAYN | 231.6 | 430.3 | 371.7 | 247.4 | 210.7 | 245.2 | 88.3 | 85.1 | 69.9 | 220.0 |
| LBS | 477.8 | 463.2 | 475.7 | **407.1** | 343.7 | 401.1 | 150.8 | 112.3 | 134.6 | 329.6 |
| PEAC-LBS | **486.4** | **484.9** | **483.6** | **406.9** | **373.6** | **432.5** | **180.4** | **145.7** | **188.2** | **353.6** |

| Walker-length | stand | walk | run | flip | average |
|-|-|-|-|-|-|
| LBS | **977.5** | 909.1 | **557.5** | 630.4 | 768.6 |
| PEAC-LBS | **970.0** | **976.4** | 544.2 | **764.9** | **813.9** |

| Cheetah-torsolength | run | run_backward | flip | flip_backward | average |
|-|-|-|-|-|-|
| LBS | 625.9 | **512.9** | 611.9 | 523.0 | 568.4 |
| PEAC-LBS | **745.3** | 499.7 | **646.4** | **649.2** | **635.2** |

As shown in these tables, PEAC achieves substantially better performance than the baselines. These experiments indicate that PEAC can handle various kinds of embodiment differences, including different morphologies.
Also, designing more effective methods for handling complicated embodiments like Humanoid is a promising future direction. Reference: [1] Learning Robust State Abstractions for Hidden-Parameter Block MDPs Pdf: /pdf/19480da9a573e7b4dcae5e92235d12939ddf84a4.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Paths to Equilibrium in Games
Accept (spotlight)
Summary: The authors study and affirm the possibility of constructing a satisficing path to a Nash equilibrium, from any initial strategy profile, in n-player normal-form games. Satisficing paths generalize best-response paths in that a player that is not best responding is not restricted in its strategy update. Players that were previously best-responding maintain their strategy. The authors use their results in a discussion to suggest that future design of MARL algorithms incorporate exploration periods among strategies, especially when they are not best-responding. The authors go on to discuss their contributions in the contexts of Markov games, decentralized learning, and dynamical systems. Strengths: The paper is well-described and well thought-out. Satisficing improvement is an interesting concept that could be useful in MARL research. The main contribution (Theorem 1) seems to provide a useful conclusion about how MARL algorithms could interface with satisficing paths rather than the best-response concept. Good extension beyond the work of (51) (line 183). It's good that the proof is not immediate, especially with Lemma 1 in Appendix A. Good proof sketch on Line 189. Overall, I believe this paper would provide a fruitful conceptual addition to the MARL research discussion. Weaknesses: Remark 1 suggests the algorithm is centralized rather than decentralized and learning-based. Under Case 1 you say "fix an arbitrary Nash equilibrium $z^*$." How does one discover $z^*$? Isn't the point of using MARL algorithms to (1) use decentralized learning to discover a NE, or (2) identify that NE from a centralized perspective, which is PPAD-complete? I am confused about how the centralized entity forms its recommendations for which strategies agents should play next.
A similar confusion holds for Case 2: I could be reading this not clearly enough, but the proof appears to suggest "make most agents unsatisfied, then arbitrarily jump to a NE (subject to feasibility via the remaining satisficed agents)." Could you please elaborate on how this is not a trivial conclusion (line 367)? Technical Quality: 3 Clarity: 4 Questions for Authors: If there's enough space, could you elaborate on the open question left by (51) on line 73? It seems that the satisficing principle is significantly studied (line 55), but I don't have the best understanding of how or why this principle is significant in the pre-existing literature. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
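As a concrete illustration of the satisficing ("win-stay") condition discussed in this review, the following toy sketch checks the condition along a path of pure strategy profiles in a two-player bimatrix game. This is our own hypothetical example (the function names, payoff convention, and Prisoner's Dilemma payoffs are our choices, not from the paper):

```python
# Toy sketch (our own illustration, NOT from the paper): checking the
# "win-stay" condition that defines a satisficing path, for PURE strategy
# profiles in a two-player bimatrix game.

def best_response(payoff, opp_action):
    """Indices of actions maximizing payoff[own][opp_action]."""
    values = [row[opp_action] for row in payoff]
    best = max(values)
    return {a for a, v in enumerate(values) if v == best}

def is_satisficing_path(path, payoff1, payoff2):
    """True if, along `path` (a list of (a1, a2) profiles), any player who
    is best responding at step t keeps the same action at step t+1."""
    for (a1, a2), (b1, b2) in zip(path, path[1:]):
        if a1 in best_response(payoff1, a2) and b1 != a1:
            return False
        if a2 in best_response(payoff2, a1) and b2 != a2:
            return False
    return True

# Prisoner's Dilemma, payoff[own_action][opponent_action]; action 1 = defect.
P1 = [[3, 0], [5, 1]]
P2 = [[3, 0], [5, 1]]

# A valid satisficing path from mutual cooperation to the Nash equilibrium:
# at (0, 0) nobody is best responding, so both players may update freely.
assert is_satisficing_path([(0, 0), (1, 0), (1, 1)], P1, P2)
# Invalid: player 1 is best responding at (1, 1) but then switches.
assert not is_satisficing_path([(1, 1), (0, 1)], P1, P2)
```

Note that the paper's Theorem 1 concerns mixed extensions of finite games; as the authors state in a response below, restricting to pure strategies can break the existence of satisficing paths to equilibrium, so this sketch only illustrates the definition.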
Rebuttal 1: Rebuttal: Thank you for your close reading of our paper and for sharing your concerns. We wish to primarily address the points of confusion identified in your review, but perhaps it would be helpful to first address your question on the connection between our work and the previous literature. Indeed, the satisficing principle has been widely employed in several previous works across game theory and MARL, such as those identified on Line 55. While the principle itself is natural and widely used, the graph theoretic structure was studied only indirectly by early proponents of satisficing algorithms (e.g. [19], [20]), and the terminology of satisficing was not introduced until later. Since this idea appeared inherently in earlier algorithms, it is somewhat well studied, with some interesting theorems proved incidentally while studying convergence properties of specific algorithms. On the other hand, since it was not formalized until later, some fundamental structural questions (such as the open question we answer) were not addressed head-on in those earlier works. Complementing and building on existing work, our theoretical study on the structure of satisficing paths may be interpreted as being upstream of the analysis of specific MARL algorithms. One aim of our work was to show that such paths to equilibrium exist, since this existence result is necessary for any satisficing-based MARL algorithm to be effective. That being said, it is important to note that the structure we study is separate from any individual satisficing-based MARL algorithm. In fact, the process described in the Proof of Theorem 1 is not an algorithm, but an existence proof described procedurally. Remark 1 cautions the reader that each step of this procedure is analytic rather than algorithmic. As an existence proof, justified non-constructive steps (e.g. 
the selection of an arbitrary Nash equilibrium $z^*$ by the analyst, which is guaranteed to exist) are acceptable, and questions about the complexity of such non-constructive steps do not apply to an existence proof such as ours. Nevertheless, these questions are certainly relevant and important for downstream applications such as the complexity analysis of specific MARL algorithms. --- Rebuttal Comment 1.1: Title: Thank you for your response. Comment: Thank you for your response.
Summary: This paper explores the dynamics of strategy updates in MARL and game theory, focusing on sequences of strategies satisfying a pairwise constraint. This constraint requires that an agent best responding in one period does not switch its strategy in the next period but does not constrain the non-optimizing agents in any way. Sequences of strategies with this property are called satisficing paths. In this paper, the authors have shown that every finite normal-form game enjoys the satisficing paths property. Overall, the paper provides a new perspective on achieving equilibrium in multi-agent systems and has significant implications for the design and analysis of MARL algorithms. Strengths: This paper studies a very interesting problem: is it always possible to construct a satisficing path that terminates at equilibrium for a given game and initial strategy profile? The main strengths of this paper are: 1. The introduction of satisficing paths as a generalization of the best response path is both novel and insightful. 2. The authors provide a rigorous proof that any finite normal-form game has the satisficing paths property. This proof is well-structured and addresses an open question in the field. 3. The paper's finding that reward-deteriorating strategic updates can drive play to equilibrium is counterintuitive yet compelling. 4. The practical implications for the design of MARL algorithms are articulated. The paper provides concrete suggestions on how to incorporate satisficing updates and exploratory strategies, which could improve convergence properties and performance in a wider range of games. Weaknesses: 1. Although this paper provides extensive theoretical results, it lacks empirical validation. Including experimental results or simulations, even a simple example, to demonstrate the practical applicability and performance of satisficing paths in real-world scenarios would strengthen the paper's impact. 2.
While the theoretical contributions are significant, the proof techniques used are complex and may not be easily accessible to all readers. Providing more intuitive explanations could make the results more understandable to a broader audience. Although it is not a large weakness, it would improve the readability if it could be resolved. 3. The paper offers valuable algorithmic insights but falls short of providing detailed guidance on implementing these insights in practical MARL algorithms. Including specific case studies would make the theoretical contributions more actionable for practitioners. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. This paper provides extensive theoretical results, and these results are compelling; have the authors considered validating their findings through empirical experiments? I am wondering how satisficing paths perform in practical multi-agent reinforcement learning scenarios. 2. The authors have proved that any finite normal-form game has the satisficing paths property. How do you envision the scalability of your approach to larger, more complex game settings? Are there specific scalability challenges that need to be addressed? 3. I am just wondering whether the authors have considered exploring alternative pathways or strategies for reaching equilibrium. If so, how do these alternatives compare to satisficing paths in terms of convergence speed and robustness? Confidence: 2 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes, the authors provided limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your kind words and for your questions. Efficiency and practical performance are important considerations for work on MARL. Our theoretical study here is somewhat orthogonal to efficiency, and we do not take a position on the exploratory ("lose-shift") mechanism used by any particular satisficing-based MARL algorithm, but we can say a few things about the connections between this theory and efficiency. 1. Satisficing paths can be produced as the output of many different MARL algorithms. In fact, they can be produced by any algorithm that respects the "win-stay" condition of keeping an old strategy whenever it is a best response to the last period's strategy profile. This means a wide class of algorithms are available to us for simulation purposes, and a given algorithm may be highly effective (or highly ineffective) for a given game. As such, we believe that simulations of a particular MARL algorithm in a specific game may not be a meaningful representation of the theoretical findings of this paper, on the existence of paths. 2. In [20], the authors studied a particular satisficing algorithm that relied purely on exploration to drive play to Nash equilibria. The authors of that work offered some remarks on how their algorithm scales with the number of agents and the size of their action sets. In the case of *purely* randomized exploration when players are unsatisfied, the authors observed that their algorithm scales poorly and inefficiently. Their observations underline the fact that a well-selected strategy update function is needed to ensure good performance in larger, more complex games. 3. We have considered a spectrum of pathways for reaching equilibrium, with one end of the spectrum being pure best response dynamics (possibly with inertia) and the other end being purely exploratory strategy updates (when players are unsatisfied).
In terms of convergence speed, dynamics based on best responding without exploration can be very quick in some settings, such as potential games, but can also fail to converge in other settings, such as zero-sum games. Thus, algorithms based on exploration with the satisficing principle tend to be more robust, if slower, than algorithms on the other end of the spectrum. Of course, there are other paradigms for algorithm design outside of this spectrum, which we believe are also very interesting. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed responses. I keep the score.
Summary: The paper answers the following question affirmatively: "For an arbitrary n-player normal-form game and an arbitrary initial strategy profile x1, is it always possible to construct a satisficing path from x1 to a Nash equilibrium of the game?". Strengths: The presentation of the paper is excellent, in particular it is very clear which research question is tackled by the authors. The proof of the main result (including lemmas) is not trivial and offers interesting elements. Weaknesses: I have one main complaint, and this is related to my disagreement with the following sentence at the beginning of section 3.2: "Theorem 1 shows that play will be driven along satisficing paths to an equilibrium." I disagree, because to have the guarantee that it will be driven to an equilibrium, you still need to find a way to get all unsatisfied players to the equilibrium, and this is not a trivial task at all. What your theorem 1 shows is the existence of such path to equilibrium. However, you construct this path in a very specific way, and even in the case where all players are unsatisfied (that is your Case 1, page 6), you need to find a way to transition from this situation of all-unsatisfied-players to an equilibrium (Case 1, page 6 assumes that you know such an equilibrium). And finding this is not easy if you don't know an equilibrium a priori. In other words: in large games, it may occur that almost every path is satisficing because almost no player will be exactly satisfied. This makes me wonder about the usefulness of the concept of satisficing path. The idea of weakening best-response paths where all players best-respond is interesting, but the current definition of satisficing path may be too weak. To summarize: the paper presents an existence result, but it would be much stronger and more meaningful if authors presented an algorithm to reach Nash equilibria along satisficing paths without knowing the Nash equilibrium a priori.
Technical Quality: 4 Clarity: 4 Questions for Authors: cf. my comments in the "Weaknesses" section. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your engaged reading and review of our paper! Our aim with the original wording at the beginning of Section 3.2 was to contrast approaches based on satisficing with approaches based (for instance) on best responding, where in the latter it is possible that no path to equilibrium exists from a particular starting point. In the revised version, Section 3.2 will begin with the following: "When coupled with a MARL algorithm that uses an exploratory satisficing strategy update, play will be driven along satisficing paths. Theorem 1 shows that for any starting strategy profile, some such path connects the strategy profile to an equilibrium, and so a sufficiently exploratory strategy update may drive play to equilibrium along a satisficing path." With regards to a point raised in your summary, we wish to note that algorithms based (implicitly) on the satisficing principle with sufficient exploratory search when unsatisfied were proposed in references [19] and [20]. Those algorithms were restricted to special classes of games less general than those considered here (namely, two-player games in [19] and games satisfying regularity conditions in [20]). Those algorithms discretize each player's strategy set and select new strategies from this discrete subset. The satisfaction condition used in these papers replaced best responding by approximate best responding, and the authors showed that, in their specific subclasses of games, MARL using discretized satisficing leads to approximate equilibrium with arbitrarily high probability. One of the objectives of our paper was to show that such algorithms actually have a theoretical basis beyond the settings considered in [19] and [20]. We believe Theorem 1 serves this objective and offers an interesting perspective on its own.
As a topic of future work, we agree that it would be interesting to study how the existence of paths interfaces with discretization and approximate best responding, which were jointly needed for the algorithmic guarantees of [19] and [20]. --- Rebuttal 2: Comment: I have read the authors' rebuttal and keep my score unchanged.
Summary: The paper delves into the strategic dynamics of MARL, focusing on the evolution of strategy profiles among agents. It introduces satisficing paths, a sequence of strategies where an agent that is best responding in one period does not switch strategies in the next, which allows for exploration in optimization. The central question addressed is whether, for any given game and initial strategy profile, it's always possible to construct a satisficing path that leads to Nash Equilibrium. The paper provides a positive answer for normal-form games, suggesting that reward-deteriorating strategic updates can drive play towards Nash equilibrium. The analysis has implications for MARL algorithms, offering new insights into algorithm design that could help avoid common drawbacks like cyclical behaviour. Strengths: 1. The paper provides a significant theoretical result regarding the existence of satisficing paths in normal-form games. 2. It offers a novel perspective on how suboptimal strategic updates can aid convergence to equilibrium, contrary to typical reward-improving approaches. 3. The findings have clear implications for MARL algorithm design, suggesting potential modifications to enhance convergence properties. 4. The results are not limited to two-player games but extend to n-player general-sum normal-form games. Weaknesses: 1. The paper is theoretical, and it's unclear how the insights would translate into practical MARL algorithms without empirical validation. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. How can the theoretical results be practically implemented in MARL algorithms? 2. Are there any known limitations or exceptions where satisficing paths might not lead to equilibrium? 3. Can the concepts presented be extended to continuous action spaces or other game types? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: See above comments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. Our theoretical results have some practical consequences for MARL algorithms. First, they can be used to justify and analyze existing algorithms (such as those of references [19], [20], [33], and others) beyond the narrower classes of games for which they were designed. Second, our results inform algorithm design in MARL to break cycles, as described in Lines 298-308 of our paper. In particular, one can combine random search when unsatisfied with cycle-breaking random search (or otherwise selective, non-best-responding strategic updates). These added ingredients will influence the trajectory of strategy iterates to follow satisficing paths, which we have shown have the potential to lead to equilibrium even in cases where the smaller set of best-response paths do not lead to equilibrium. 2. Thank you for this important question. Yes indeed, satisficing paths to equilibrium do not exist in all game theoretic settings, and negative results do exist. In this paper, we show that *mixed extensions* of finite, normal-form games (where strategies can be probability distributions over a set of actions) always admit at least one satisficing path terminating at equilibrium from any initial strategy profile. On the other hand, if strategies are not allowed to be randomized, then one can produce examples of games with a pure Nash equilibrium and an initial strategy profile that is not connected to the Nash equilibria by a satisficing path. 3. Absolutely, the definitions of satisficing can be extended, but the proof techniques for Theorem 1 may not carry over automatically. Some details of our proofs rely on linear programming formulations where the feasible set is an n-dimensional simplex and so admits a finite number of extreme points. Our proof subsequently took advantage of this fact. When one considers continuous action spaces, the number of extreme points may not be finite, and this loss of finiteness may prevent our subsequent analysis. 
Although our proof technique may not immediately carry over in this more general setting, we believe similar ideas can be carried out in a generalization of this work. --- Rebuttal Comment 1.1: Title: Thank you for your response. Comment: Thank you for your response. I keep the score.
Rebuttal 1: Rebuttal: Dear AC and reviewers, We would like to thank you all for your time and effort in reading and reviewing our paper, and we would like to thank the reviewers for their thoughtful input and positive evaluations. For your convenience, we have responded to each reviewer's questions separately below. Kind regards, The Authors.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Learning 1D Causal Visual Representation with De-focus Attention Networks
Accept (poster)
Summary: This paper mainly improves the existing 1D causal models when handling visual inputs, which are generally 2D non-causal. The key insight of the authors mainly comes from Figure 1: the 1D causal models will over-focus on a few tokens instead of capturing the rich information from the whole image. To fix this issue, the authors propose the de-focus network. They first view the spatial decay and positional embedding as bandpass filters and find that such filters lead to the capture of diverse information. In addition, the improvement in optimization, especially a significantly larger dropout, also encourages the model to make use of diverse sources of information. Finally, the authors experiment on image classification, object detection, and image-text training to prove the effectiveness of their approach. Strengths: 1. In Figure 1 and Sec. 4.1, the authors make a very good demonstration and reasoning of their intuitions. 2. The De-focus network is simple and reasonable, including both the bandpass filter part and the optimization improvement. 3. The experiments across a wide range of fundamental tasks prove the effectiveness of the De-focus network. Weaknesses: I think the paper is good overall. I only have one question regarding the generalization of the method, which is not discussed in the paper. Transformer-based architectures like ViT in image classification and CLIP show certain generalizations to different resolutions. For example, people can train a model on 224x224 and inference on 336x336 by interpolating the positional embeddings. I am curious how well the de-focus network will perform under such generalization scenarios, compared with the 2D non-causal transformer-based architectures. Technical Quality: 4 Clarity: 4 Questions for Authors: See weakness. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes, the authors have addressed such limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate Reviewer WAfj for the constructive suggestions. Please see our detailed response below. --- > **Q1: Transformer-based architectures like ViT in image classification and CLIP show certain generalizations to different resolutions. For example, people can train a model on 224x224 and inference on 336x336 by interpolating the positional embeddings. I am curious how well the de-focus network will perform under such generalization scenarios, compared with the 2D non-causal transformer-based architectures.** A1: Thanks for your suggestion. We test the transfer performance under resolution 336 and report the results in the table below.

| | Resolution 224 (same as training) | Resolution 336 |
|-------|-------|-------|
| DeiT-Base | 81.8 | 81.6 |
| De-focus Mamba-Base | 82.0 | 81.6 |

The results demonstrate that our De-focus Networks can also transfer to different resolutions effectively. --- Rebuttal 2: Comment: Thanks to the authors for the rebuttal! I have also checked the other reviewers' questions and the author's rebuttal. I maintain my current scores for now.
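For context, the resolution transfer discussed above typically works by interpolating the learned 2D grid of positional embeddings. A minimal NumPy sketch of that trick (our own illustration with a hypothetical `interpolate_pos_embed` helper, not the authors' code):

```python
# Minimal sketch (our own illustration, not the authors' code) of the usual
# resolution-transfer trick: bilinearly resize the learned 2D grid of
# positional embeddings, e.g. from 14x14 (224px / patch 16) to 21x21 (336px).
import numpy as np

def interpolate_pos_embed(pos_embed, old_size, new_size):
    """Resize a (old_size**2, dim) array of grid positional embeddings
    to (new_size**2, dim) with separable bilinear interpolation."""
    dim = pos_embed.shape[1]
    grid = pos_embed.reshape(old_size, old_size, dim)
    # fractional source coordinate for every target row/column
    coords = np.linspace(0.0, old_size - 1, new_size)
    lo = np.floor(coords).astype(int)
    hi = np.minimum(lo + 1, old_size - 1)
    w = coords - lo  # interpolation weight toward the `hi` neighbor
    # interpolate along rows, then along columns
    rows = grid[lo] * (1 - w)[:, None, None] + grid[hi] * w[:, None, None]
    out = rows[:, lo] * (1 - w)[None, :, None] + rows[:, hi] * w[None, :, None]
    return out.reshape(new_size * new_size, dim)
```

In practice the class token's embedding (if any) is kept as-is and only the patch-grid portion is resized; frameworks usually do this with a built-in bicubic resize, but the separable bilinear version above shows the idea.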
Summary: This paper addresses the issue of over-focus in vision models by proposing strategies of using large and scheduled drop path rates and an auxiliary loss on globally pooled features. These strategies aim to encourage the model to focus on a wider range of tokens and improve network optimization. The paper is logically structured, the methods are novel, and the experimental results validate the effectiveness of the proposed approach. Strengths: 1. The paper is well-organized and easy to follow. 2. The authors have an insightful perspective on existing vision models. 3. The research perspective of this paper is interesting and will be inspiring for future studies. Weaknesses: I am confused about some of the authors' viewpoints. For example, in the introduction, the authors mentioned that the causal modeling methods explored in existing vision models give them causal reasoning capabilities. 1. Why do vision models based on standard Transformers without adding a causal attention mask still possess some level of causal reasoning abilities? 2. Can we consider that vision Transformers inherently have causal abilities, and that existing improved methods simply amplify these models' causal capabilities? Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Can using large spatial dropout/dropblock or randomly dropping masked image tokens achieve similar effects compared to using a Large Drop Path Rate? If these methods are feasible, could additional experimental comparisons be included? 2. The authors need to provide further explanation on the rationale behind the positioning of learnable decay and learnable RoPE in the attention mechanism. Besides, can we directly deploy them on the feature maps before the attention mechanism? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: 1. The authors compared the parameter counts between models in terms of efficiency, but parameter count alone may not fully reflect model efficiency.
I suggest the authors consider adding metrics related to model FLOPs and latency in Table 1 and Table 2. 2. Adding a classification loss to the model is not a novel method. 3. The ablation study lacks experimental results comparing with other causal methods. 4. There is a small grammatical error. In the paragraph on line 38, the phrase "On one hand xxx, On the other hand xxx" typically indicates a contrasting situation, but in this context, the authors may not intend to convey such a situation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer J7C5 for the feedback. It should first be clarified that the reviewer seems to have confused the concept of causal reasoning with 1D causal modeling in our paper. This may have led to some misunderstandings, such as the mistaken belief that we did not compare with other causal modeling methods in the experiments. Other concerns, like additional experiments with dropout, do not constitute valid reasons for rejection. Detailed responses are provided below. --- > **Q1: Why do vision models based on standard Transformers without adding a causal attention mask still possess some level of causal reasoning abilities? Can we consider that vision Transformers inherently have causal abilities, and that existing improved methods simply amplify these models' causal capabilities?** A1: Our paper does not involve the concept of causal reasoning. There is a misunderstanding: causal modeling [1,2] does not refer to causal reasoning; it means that image patches are treated as a time series, like language, and the patches at the front of the sequence are used to predict subsequent patches. Whether current vision models possess causal reasoning abilities is orthogonal to our paper. We will add the definition of causal modeling in the introduction. --- > **Q2: Can using large spatial dropout/dropblock or randomly dropping masked image tokens achieve similar effects compared to using a Large Drop Path Rate? If these methods are feasible, could additional experimental results comparisons be included?** A2: Thanks for your suggestion. It should first be noted that while these attempts are interesting, they do not diminish the contributions of our paper. We have conducted additional experiments, and the results are provided in the table below. They show that using large spatial dropout or dropping masked image tokens is inferior to using a large drop path rate.
| | ImageNet Top1 Acc |
|-------|-------|
| Large Drop Path Rate | 82.0 |
| Large Spatial Dropout | 80.9 |
| Drop Masked Image Tokens | 80.2 |

--- > **Q3: (a) The authors need to provide further explanation on the rationale behind the positioning of learnable decay and learnable RoPE in the attention mechanism. (b) Besides, can we directly deploy them on the feature maps before the attention mechanism?** A3: (a) Please refer to L38-L42 in the introduction and L127-L130 in Section 4.1 for the rationale behind using learnable decay and RoPE in the attention mechanism. Specifically, the motivation stems from our observation of the "over-focus" issue. Learnable decay helps increase or decrease the network's emphasis on distant data. Learnable RoPE allows the network to focus on different aspects of data within various spatial frequencies. By employing multiple settings for both decay and RoPE, the network can create a more balanced and varied pattern of attention. (b) Learnable decay and learnable RoPE cannot be applied directly to the feature maps, because they must take the relative positions of different tokens into account. --- > **Q4: The authors compared the parameter counts between models in terms of efficiency, but parameter count alone may not fully reflect model efficiency. I suggest the authors consider adding metrics related to model FLOPs and latency in Table 1 and Table 2.** A4: It should be noted that efficiency is not the focus of our paper. As in many other papers, parameter counts are reported to ensure a fair comparison between models of similar size, which is not directly related to efficiency. To address the concern, we test the FLOPs and latency of our model, and our "de-focus attention" strategy adds less than 1% to both the model's FLOPs and latency.
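As background for the RoPE component referenced in A3: rotary position embeddings rotate each (even, odd) feature pair by a position-dependent angle, so that query-key dot products depend only on relative position. A minimal sketch of the standard, non-learnable RoPE that the paper's learnable variant builds on (`rope` is our own helper name, not from the paper's code):

```python
# Background sketch of STANDARD rotary position embedding (RoPE); the paper
# uses a learnable variant. All names here are our own.
import math

def rope(vec, pos, theta=10000.0):
    """Rotate each (even, odd) pair of an even-length vector by an angle
    that grows with `pos` and decays with the pair's index."""
    d = len(vec)
    out = []
    for i in range(0, d, 2):
        freq = theta ** (-i / d)
        angle = pos * freq
        c, s = math.cos(angle), math.sin(angle)
        x, y = vec[i], vec[i + 1]
        out.extend([x * c - y * s, x * s + y * c])
    return out

# Key property: the dot product of rotated query/key vectors depends only on
# the relative position (query pos - key pos), not on absolute positions.
q, k = [1.0, 0.0], [0.0, 1.0]
d1 = sum(a * b for a, b in zip(rope(q, 3), rope(k, 5)))
d2 = sum(a * b for a, b in zip(rope(q, 0), rope(k, 2)))
assert abs(d1 - d2) < 1e-9
```

This relative-position property is why RoPE (and its learnable frequencies in the paper) must act inside the attention computation rather than on feature maps beforehand, as the authors note in A3(b).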
--- > **Q5: Adding a classification loss to the model is not a novel method.** A5: The novelty of our paper includes the observation of the "over-focus" issue in training causal models, the proposal of the "de-focus attention" strategy, as well as the demonstration that 1D causal models can deliver comparable performances to non-causal models (as noted by the summary of Reviewer WFuU, Reviewer 4dxw and Reviewer WAfJ). It should be emphasized that the use of auxiliary loss is only a part of our overall "de-focus attention" strategy. --- > **Q6: The ablation study lacks experimental results comparing with other causal methods.** A6: We have compared with other causal methods in the main results, including RetNet[3], Mamba[4], Mamba-ND[5]. Please refer to Table 1 in the paper. --- > **Q7: There is a small grammatical error. In the paragraph on line 38, the phrase "On one hand xxx, On the other hand xxx" typically indicates a contrasting situation, but in this context, the authors may not intend to convey such a situation.** A7: Thanks for your advice, we shall fix it in the final version. --- [1] Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Jason Wei, Xuezhi Wang, Hyung Won Chung et al. UL2: Unifying Language Learning Paradigms. In The Eleventh International Conference on Learning Representations, 2023. [2] Erik Nijkamp, Hiroaki Hayashi, Caiming Xiong, Silvio Savarese, and Yingbo Zhou. Codegen2: Lessons for training llms on programming and natural languages. arXiv preprint arXiv:2305.02309, 2023. [3] Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, and Furu Wei. Retentive network: A successor to transformer for large language models. arXiv preprint arXiv:2307.08621, 2023. [4] Albert Gu, and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023. [5] Shufan Li, Harkanwar Singh, and Aditya Grover. Mamba-nd: Selective state space modeling for multi-dimensional data. 
arXiv preprint arXiv:2402.05892, 2024. --- Rebuttal Comment 1.1: Title: Response to authors Comment: I have thoroughly read the authors' responses and revisited the paper. Thank you for the detailed answers. However, I still have some concerns regarding Q1 and Q2. 1. For Transformer models that do not include adding a causal attention mask, how do they possess the ability for causal modeling? Furthermore, based on the current definition of "causal modeling," can we consider CNNs as a form of causal modeling as well? 2. Building on the last question, if existing models already inherently possess causal modeling capabilities, would the proposed method be considered merely an enhancement? 3. The operational mechanism of the Large Drop Path Rate seems similar to large spatial dropout/dropblock or randomly dropping masked image tokens. Why is it that the Large Drop Path Rate can achieve more advantageous results? What are the reasons that lead to its superior performance? Observation from the table in your reply. Many thanks. --- Rebuttal 2: Comment: Thanks for your reply! We appreciate your time and efforts to read our responses and paper. Please see our new response below. --- > **Q1: For Transformer models that do not include adding a causal attention mask, how do they possess the ability for causal modeling? Furthermore, based on the current definition of "causal modeling," can we consider CNNs as a form of causal modeling as well?** A1: It should be first clarified that standard Transformers or CNNs can not learn to perform causal modeling without a causal attention mask. In mathematical terms, given an input token sequence $x_1, ..., x_n$, causal modeling aims to maximize the likelihood of the correct next token given previous tokens in the sequence, which can be formulated as $$\max_\theta\sum_{i=1}^{n-1}\log p(x_{i+1}|x_1, ..., x_i;\theta),$$ where $\theta$ refers to the parameter set of the model. 
For Transformers and CNNs, a causal mask is necessary to restrict access to future tokens. Without a causal mask, Transformers can access the entire input sequence, i.e., they will use $x_1, ..., x_n$ as the probability condition rather than $x_1, ..., x_i$. Similarly, CNNs can access neighborhood tokens in the local window, i.e., they will use $x_{i-w}, ..., x_{i+w}$ as the probability condition, where $w$ is the local window width. These conditions both include the next token, which leads to information leakage. Therefore, standard Transformers and CNNs will rely on such shortcuts, failing to learn to perform causal modeling effectively. --- > **Q2: Building on the last question, if existing models already inherently possess causal modeling capabilities, would the proposed method be considered merely an enhancement?** A2: Based on the reasons mentioned above, existing models do not inherently have causal modeling abilities, and our De-focus Attention Networks demonstrate that causal models equipped with the "de-focus attention" strategy can achieve comparable performances with non-causal models. --- > **Q3: The operational mechanism of the Large Drop Path Rate seems similar to large spatial dropout/dropblock or randomly dropping masked image tokens. Why is it that the Large Drop Path Rate can achieve more advantageous results? What are the reasons that lead to its superior performance? Observation from the table in your reply.** A3: Large spatial dropout or dropping masked image tokens could theoretically introduce some regularization effects. However, there are nuanced differences in their mechanisms and impacts on model training compared to large drop path rate. - Large spatial dropout only drops portions of the tokens, so some tokens can still access information from neighboring tokens. Therefore, the network can still learn to focus on neighboring tokens and rely on depth to gather information. 
- Dropping masked image tokens completely removes some input tokens, so the model may fail to learn to pay attention to tokens that are distant in positions. As a result, it does not prevent the model from leveraging depth to learn representations. Its regularization effect is thus not significant. --- We hope that our response can address your concerns, and any further discussion is welcomed.
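To make the causal-mask point in A1 above concrete, here is a minimal sketch (our illustration; the paper's models add learnable decay and RoPE on top of such masking):

```python
import numpy as np

def causal_mask(n):
    # Position i may attend only to positions j <= i, so when predicting
    # x_{i+1} the model conditions on x_1..x_i and never sees the answer.
    return np.tril(np.ones((n, n), dtype=bool))

def masked_attention_scores(scores, mask):
    # Disallowed positions get -inf before the softmax, i.e. zero weight.
    return np.where(mask, scores, -np.inf)
```

Without this mask, every row of the attention matrix can see the full sequence, including the token to be predicted, which is exactly the information leakage described in A1.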
Summary: The paper addresses the challenges of using 1D causal modeling for images, which traditionally require 2D non-causal modeling due to inherent modality differences between vision and language models. It identifies a significant issue in existing 1D causal vision models termed "over-focus," where the model's attention is overly concentrated on a few visual tokens, hindering diverse feature extraction and optimization. To combat this, the paper introduces De-focus Attention Networks that use learnable bandpass filters to diversify attention patterns and incorporate strategies like large drop path rates and an auxiliary loss on globally pooled features to broaden token attention and enhance model optimization. Experiments on several image understanding benchmarks show that these innovations enable 1D causal visual representation to achieve performance comparable to 2D non-causal models in various tasks, including global perception and multi-modal understanding. Strengths: + Identifying the over-focus issue in 1D causal visual modeling is interesting and provides a starting point for the work in this submission. + A "de-focus attention" strategy is introduced to address this issue. It incorporates learnable bandpass filters into the existing attention mechanisms to generate diverse attention patterns. The proposed idea is reasonable. A high drop path probability and an auxiliary loss are also proposed for better optimization. + Experiments on several visual understanding benchmarks show that, via the proposed De-focus Attention Networks, 1D causal visual representation can match the performance of 2D non-causal representation in tasks involving global perception, dense prediction, and multi-modal understanding. Weaknesses: - For CLIP experiments, it is helpful to also report comparisons for cross-modal retrieval on MSCOCO to follow existing routines of CLIP model comparisons.
- The major motivation is about addressing challenges in constructing a unified multi-modal model, while the main experiments are about image understanding tasks. To this point, discussions/explorations about the impact of the proposed method on MLLMs seem to be missing. I'm also curious about its potential applications for image generation models. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to details in the Weakness part above. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Some limitations of the work have been discussed in the submission. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer 4dxw for the thoughtful review and the insights provided. We appreciate the opportunity to discuss the enhancements and implications of our research further. --- > **Q1: For CLIP experiments, it is helpful to also report comparisons for cross-modal retrieval on MSCOCO to follow existing routines of CLIP model comparisons.** A1: In response to your comments, we have conducted additional experiments using the CLIP model on the MSCOCO dataset. These new results further reinforce the findings presented in our initial submission, emphasizing the robustness and applicability of our approach across different datasets.

| Model | Causal | Image Retrieval Recall@1 | Image Retrieval Recall@5 | Image Retrieval Recall@10 | Text Retrieval Recall@1 | Text Retrieval Recall@5 | Text Retrieval Recall@10 |
|-------|-------|-------|-------|-------|-------|-------|-------|
| OpenAI CLIP-Base/32 | No | 30.4 | 55.0 | 65.7 | 49.2 | 73.4 | 82.4 |
| OpenCLIP-Base/32 | No | 35.3 | 61.0 | 71.8 | 52.5 | 77.0 | 84.9 |
| De-focus Mamba-Base/32 | Yes | 34.6 | 60.3 | 71.2 | 51.7 | 76.3 | 84.8 |

--- > **Q2: The major motivation is about addressing challenges in constructing a unified multi-modal model, while the main experiments are about image understanding tasks. To this point, discussions/explorations about the impact of the proposed method on MLLMs seem to be missing.** A2: Thank you for your suggestions. In our research, we mainly study the application of causal modeling to images and achieve competitive performance. These results show a promising path towards a unified framework for joint causal modeling of images and text. However, the integration of these modalities remains a considerable challenge. Models such as Fuyu-8B[1] and Chameleon[2] that attempt to fuse image and text modeling are examples in this regard.
Unfortunately, the details of these models and their training process are not publicly available, and they have not yet achieved the performance level of state-of-the-art MLLMs. This limitation underscores the crucial need for ongoing research in this domain to bridge the existing gap. --- > **Q3: I'm also curious about its potential applications for image generation models.** A3: Recent preliminary studies, such as "Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation"[3], have begun to uncover the potential of causal modeling in image generation. While our work primarily focuses on perception and understanding, these two works together indicate that causal modeling is a viable alternative to traditional methods across various areas of AI research. --- [1] Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, and Sağnak Taşırlar. Introducing our multimodal models, 2023. [2] Chameleon Team. Chameleon: Mixed-modal early-fusion foundation models. arXiv preprint arXiv:2405.09818, 2024. [3] Peize Sun, et al. Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation. arXiv preprint arXiv:2406.06525, 2024.
Summary: The paper explores the feasibility of representing images with 1D causal modeling in unified multi-modal vision and language models, addressing the "over-focus" issue in existing models by proposing De-focus Attention Networks (DANs) with learnable bandpass filters and enhanced training strategies. Extensive experiments show that 1D causal visual representation can perform comparably to 2D non-causal representation in global perception, dense prediction, and multi-modal understanding tasks, with code to be released. Strengths: 1. The proposal to use 1D causal modeling for images is a novel approach that challenges the traditional 2D non-causal representation, opening new avenues for unified multi-modal models. 2. The use of large and scheduled drop path rates and an auxiliary loss on globally pooled features are effective training strategies that promote broader attention and better network optimization. Weaknesses: 1. The evaluation of the proposed method is inconsistent, e.g., in Table 1, the performance of the Base ViT model is reported, but the Small and Large models are missing. In Tables 2 and 3, only Mamba model results are compared. Technical Quality: 2 Clarity: 3 Questions for Authors: Please refer to the weakness section. Confidence: 2 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Limitations have been discussed in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer WFuU for the comments, yet we must emphasize that the weakness Reviewer WFuU has identified is not a sufficient reason for rejection. Our experiments based on Mamba have demonstrated that 1D visual causal modeling can achieve comparable performance to non-causal models. Other results based on ViT are included primarily to supplement this evidence. Their performance, whether superior or not, does not compromise the soundness of the arguments or the validity of the conclusions drawn. --- > **Q1: The evaluation of the proposed method is inconsistent, e.g., in Table 1, the performance of the Base ViT model is reported, but the Small and Large models are missing. In Tables 2 and 3, only Mamba model results are compared.** A1: The aim of our research is to assess the feasibility and performance potential of causal modeling, a goal we have demonstrated by using the Mamba-based model. The structure of specific models, such as the ViT-based model mentioned in the paper, serves merely to provide additional evidence, further proving that our method works well and can be applied generally. Due to computational constraints, we conducted only the most essential experiments and presented these in the paper. Addressing your concerns, we have also conducted supplementary experiments employing the ViT model: - We began by training the De-focus ViT-Small model on the ImageNet dataset. The results of this experiment are depicted in the following table.

| | Causal | ImageNet Top1 Acc |
|-----|----|----|
| (DeiT) ViT-Small | No | 79.9 |
| De-focus ViT-Small | Yes | 79.6 |
| De-focus Mamba-Small | Yes | 80.3 |

- Additionally, we conducted an experiment in a detection task setting. We did not change the existing detection architecture but instead utilized the ViT-base model listed in Table 1 as the pretrained model.
| | Causal | Epoch | AP | AP50 | AP75 |
|-----|----|----|----|----|----|
| (DeiT) ViT-Base | No | 12 | 49.1 | 69.9 | 52.7 |
| De-focus ViT-Base | Yes | 12 | 48.9 | 67.1 | 53.3 |
| De-focus Mamba-Base | Yes | 12 | 50.8 | 68.9 | 55.2 |

These additional experiments highlight the adaptability of our approach. Nonetheless, the absence of these experiments from the initial submission does not weaken our main points or lessen the reliability of the conclusions we made. ---
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
A Framework for Bilevel Optimization on Riemannian Manifolds
Accept (poster)
Summary: This paper studies Riemannian bilevel optimization, where variables of both lower and upper level problems are constrained on Riemannian manifolds. The authors propose several hypergradient based algorithms via Neumann series and automatic differentiation. Convergence analysis is provided for the proposed approaches. Experiments on synthetic problems, hyper-representation learning, meta-learning, unsupervised domain adaptation are provided to demonstrate the effectiveness of the proposed methods. Strengths: 1. The studied topic is of interest to bilevel optimization research. As far as I know, this should be the first work to study this type of research. 2. The analysis is comprehensive. The algorithms cover AID and ITD based hypergradient approximations. 3. Quite a few experiments are provided to support the theory. Weaknesses: 1. More motivation examples should be provided to validate the importance of bilevel optimization on manifolds. In the experiments, for example, in manifold meta-learning, it would be good to compare the method with meta-learning methods without the orthogonality constraint. 2. Assumption 1 may be quite strong. It requires all iterates z_1,…, are bounded in a compact space. The analysis in previous standard bilevel optimization does not require this assumption. 3. Main results contain complex constants and are a little bit hard to parse. Some simplifications may be helpful here. 4. The extension in 3.3 will be more interesting if Hessian-vector rather than Hessian-inverse is considered. Technical Quality: 3 Clarity: 3 Questions for Authors: See weakness for information. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See weakness for information. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
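As background for the Neumann-series hypergradient estimation mentioned in the summary above, here is a minimal Euclidean sketch (the paper works on Riemannian manifolds; this is our simplified illustration, not the authors' code):

```python
import numpy as np

def neumann_inv_hvp(hvp, v, eta, T):
    """Approximate H^{-1} v via the truncated Neumann series
        H^{-1} = eta * sum_{t>=0} (I - eta*H)^t,
    valid when the spectral radius of (I - eta*H) is below 1.
    Only Hessian-vector products `hvp` are needed, never an explicit
    inverse, which is what makes this estimator scale.
    """
    out = v.copy()   # t = 0 term
    cur = v.copy()
    for _ in range(T):
        cur = cur - eta * hvp(cur)   # (I - eta*H)^t v, computed recursively
        out = out + cur
    return eta * out
```

A quick sanity check on a diagonal Hessian: with `H = diag(2, 4)` and `eta = 0.2`, the iteration contracts (eigenvalues of `I - eta*H` are 0.6 and 0.2), so the truncated series converges to `H^{-1} v`.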
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating the contributions of our work in terms of comprehensive analysis and supportive experiments, as well as providing the constructive comments. **1. (W1) More examples to validate the importance of bilevel optimization on manifolds.** Thank you for the suggestion. We believe we have provided sufficient motivating applications, showcasing the importance of bilevel optimization on manifolds. One such example is in Section 4.4 that motivates the use of bilevel optimization formulation for unsupervised domain adaptation (under the optimal transport framework). As far as we know, this is a novel application of bilevel optimization to domain adaptation, where we learn a whitening matrix $\textbf{M}$ in the lower-level problem. Our numerical experiments show that employing Riemannian bilevel optimization yields more suitable adaptation by accounting for the metric structure (through whitening) in unsupervised domain adaptation. **2. (W2) Assumption 1 is strong and is non-standard in previous bilevel optimization.** Standard bilevel optimization in the Euclidean space does not require this assumption. However, for optimization on manifolds, especially when dealing with (geodesic) strongly convex objectives, such an assumption is standard and often unavoidable. See for example [21, 37, 62, 63, 64]. This assumption is to ensure bounded curvature, thus allowing to show fast convergence under strong convexity. **3. (W3) Simplification regarding complex constants.** Thank you for the suggestion. We will simplify the notations in our revision. **4. (W4) Extend Hessian-inverse to Hessian-vector in Section 3.3.** We agree that a full-spectrum analysis covering other hypergradient estimators (in particular the ones using Hessian-vector products) would be interesting. The analysis techniques would be similar to what we already did for Hessian inverse. 
In order to keep the contents concise, we decided to omit the analysis for other hypergradient estimation strategies. In the revised version, however, we will include a discussion in this regard. --- Rebuttal Comment 1.1: Comment: I thank the authors for the response. My concerns are addressed. I keep my score. --- Rebuttal 2: Comment: On Assumption 1: I agree with the concerns raised by Reviewer 4rfq regarding Assumption 1, which is typically a strong assumption in bilevel optimization. As I am not very familiar with the literature on manifold optimization, I am not very confident in validating the correctness of this assumption. I hope the authors can provide more evidence here. I found several related works on bilevel optimization on manifolds, e.g., [1][2]. I wonder whether similar assumptions are also made there. [1] Riemannian Bilevel Optimization, Li and Ma, 2024. [2] Riemannian Bilevel Optimization, Sanchayan Dutta, Xiang Cheng, Suvrit Sra, 2024. --- Rebuttal Comment 2.1: Comment: Thank you for your further comments. We kindly refer you to our responses to Reviewer 4rfq regarding clarifications on Assumption 1. In summary, we have revised Assumption 1 to only require domain compactness and unique geodesics for the lower-level problem. Such an assumption is unavoidable in the Riemannian optimization literature. Regarding your question on whether similar assumptions are used in [1,2], **yes, both [1,2] assume a bounded domain.** Specifically: - [1] states explicitly in Assumption 4.2 that $\tau(\iota, \text{dist}(y^{k,t}, y^*(x^k)))$ is bounded by $\tau$ for all $k, t$. By the definition $\tau(\iota, c) = \frac{\sqrt{|\iota|} c}{\tanh(\sqrt{|\iota|}c)}$, $\tau(\iota, c)$ is an increasing function of $\iota$ and $c$. Thus a bounded $\tau(\iota, c)$ implies a bounded domain, i.e., $\text{dist}(y^{k,t}, y^*(x^k)) \leq D$ for all $k, t$ for some constant $D$.
- [2] also requires such an assumption, which can be seen from Page 30 where they also require a constant upper bound on $\tau(\kappa, d_{\mathcal{N}} (z_k^{(t)}, y_k^*) )$ for all $k, t$, which implies a bounded domain. -------------------------------------------------------- We hope this addresses your concern, and we will incorporate additional discussions in our revised paper. As the discussion period deadline approaches, we would greatly appreciate your acknowledgment of our responses if there are no further questions. Thank you again for your time. [1] Riemannian Bilevel Optimization, Li and Ma, 2024. [2] Riemannian Bilevel Optimization, Sanchayan Dutta, Xiang Cheng, Suvrit Sra, 2024.
Summary: The paper proposes an RHGD algorithm (Algo 1) to solve the bilevel optimization problems (line 11). - Thm 1 proves the convergence to a stationary point of $F$ when using different approximations of the hypergradient. - Thm 2 shows the convergence under the stochastic setting. - Thm 3 proves the convergence with cheaper retraction. Both synthetic numerical tests and applications to real problems are provided. Strengths: The paper is well-written. The theoretical results are solid and the contributions are new to me. The algorithm is fully implementable. What I like is that the convergence is established not only for the Hessian inverse but also for its cheaper approximation. The synthetic experiment is carefully designed. Weaknesses: 1. Assumption 1 is too strong: it requires the trajectory of $z$ to be in the unique geodesic neighborhood of $z^*$. 2. Assumption 2 is slightly too strong; does this mean many simple functions like quadratic functions $f(x, y)=|x|^2+|y|^2$ do not satisfy this assumption? Technical Quality: 4 Clarity: 4 Questions for Authors: 1. Could you please clarify that the synthetic problem in Sec 4.1 satisfies all the assumptions? I only find that Assumption 3 is verified. Assumption 2 is automatically satisfied due to the compactness of the Stiefel manifold. However, could you clarify how to verify Assumption 1? 2. Maybe it is a dumb question, but in Sec. 3.1, why are HINV and AD different? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Small typo: In line 112 there are 2 negative signs. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
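The "cheaper approximation" of the Hessian inverse that the review praises can be illustrated with conjugate gradients, which solve $Hx = b$ using only Hessian-vector products. This is a Euclidean sketch of our own, not the paper's code:

```python
import numpy as np

def cg_inv_hvp(hvp, b, iters, tol=1e-12):
    """Conjugate gradient for H x = b with H symmetric positive definite.

    Each iteration costs one Hessian-vector product; H is never formed or
    inverted explicitly, which is the point of the cheaper estimators.
    """
    x = np.zeros_like(b)
    r = b.copy()          # residual b - H x
    p = r.copy()          # search direction
    rs = r @ r
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        x = x + alpha * p
        r = r - alpha * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

For an SPD system in $n$ dimensions, exact arithmetic CG converges in at most $n$ iterations, so a handful of Hessian-vector products suffices for the small well-conditioned systems arising in hypergradient estimation.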
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback on our work, particularly acknowledging that our paper is well-written with solid theoretical results. We also greatly appreciate the constructive comments. **1. (W1) Assumption 1 is strong.** We would like to highlight that Assumption 1 is often unavoidable for analyzing Riemannian optimization algorithms, especially when dealing with geodesic strongly convex functions, see for example [21, 37, 62, 63, 64] that also makes use of such an assumption. This assumption provides bound for the curvature, which is crucial for deploying a trigonometry distance bound (Lemma 3) and achieving linear convergence. This assumption can be satisfied for compact manifolds and by restricting the domain of interest. **2. (W2) Assumption 2 is strong.** Assumption 2 is not strong and can be inferred from Assumption 1 where we consider a compact domain. In this regard, the quadratic function you provided *satisfies* such an assumption when the domain is bounded. **3. (Q1) Verify the synthetic problem in Sec 4.1 satisfies Assumption 1.** Thank you for the question. Assumption 1 on the bounded domain can be satisfied directly for the Stiefel manifold. For the SPD manifold, we can take the maximal domain encompassing all the iterates considered. **4. (Q2) Why HINV and AD are different.** HINV computes the analytic expression of hypergradient using the Hessian inverse, which does not depend on the inner iterations. However, AD computes an estimate of hypergradient by differentiating through the inner iterations. --- Rebuttal Comment 1.1: Comment: Dear authors I appreciate your detailed rebuttal and I am so sorry for replying a little late. W1: I don't agree assumption 1 is an 'unavoidable requirement'. In the rebuttal, the author gives some examples of papers using the same condition, and I will take one of them [63] as an example. 
They require the manifold to be Hadamard, i.e., unique geodesics (I understand you must require unique geodesics for convex functions to exist). However, I don't see any requirement for the iteration to be in a compact set in that paper. Could you please point out where 'All iterates are contained in a compact neighbourhood' is in that specific paper? The rebuttal also states 'This assumption can be satisfied for compact manifolds and by restricting the domain of interest.' I don't think the problem is that trivial because 1. compact manifolds with no boundary (e.g., the Stiefel manifold) lead to the non-existence of convex functions and the non-existence of unique geodesics. 2. if you restrict the domain, how can you guarantee the iteration stays in that domain? This is the reason I think the assumption is too strong. W2: I agree with you that it can be inferred from Assumption 1. But again, Assumption 1 is too strong, making me think Assumption 2 is too strong. But I can take Assumption 2 if you convince me Assumption 1 is not strong. Q1: I am so sorry, but I do not think my question is that trivial. For the Stiefel manifold, compactness in Assumption 1 is automatically satisfied, but where is the unique geodesic (smooth inverse of the exponential)? For the SPD manifold, it is not that simple. What you really do is fix a domain -> find a step size as stated in Thm. 2, i.e., the step size depends on the selection of the domain. How can you simply run the algorithm and then find the domain containing all iterations? What is your step size in this case? Q2: thank you for the clarification. Best wishes, reviewer --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the follow-up questions and would like to take this opportunity to address them in detail. We understand that our previous response statement "This assumption can be satisfied for compact manifolds and by restricting the domain of interest" (given as part of our response) might not be appropriate.
What we mean is a compact **subset** of a manifold, which allows unique geodesics and geodesically convex functions. This is, for example, considered in [63] for analyzing geodesic (strongly) convex functions. All the theorems, i.e., Theorems 9, 10, 11, 12, 13, 14, 15, assume a bounded domain and make use of the diameter of the domain $D$, which is defined in Theorem 9. With the above clarifications, we proceed to answer your individual questions. **1. On Assumption 1.** Regarding our Assumption 1, we now believe that our current phrasing in the paper may have caused some confusion. We wish to clarify that our analysis only requires the unique geodesic specifically for the manifold associated with the lower-level problem. We also highlight that the compactness requirement for the upper-level problem is not strictly necessary for the analysis. The compactness assumption on the upper-level problem was initially included to facilitate a more natural interpretation of Assumptions 2 and 3. However, in contrast, for the lower-level problem, compactness of the domain and the uniqueness of the geodesic are essential for establishing linear convergence, based on [63]. We plan to rewrite Assumption 1 as follows. **Assumption 1.** All the iterates in the lower-level problem are bounded in a compact subset that contains the optimal solution, i.e., there exist constants $D_k > 0$ for all $k$ such that $d(y_k^s, y^*(x_k)) \leq D_k$ for all $s$. Such neighbourhoods admit unique geodesics. We take $D := \max_k D_k$. **2. Could you please point out where 'All iterates are contained in a compact neighbourhood' is in that specific paper?** In Theorem 15 of [63], when analyzing convergence for geodesically strongly convex functions, the definition of the parameter $\epsilon = \min (1 / \zeta(\kappa, D), \mu/ L_g )$ depends on the diameter of the domain $D$, which is defined in Theorem 9.
This suggests the bounded domain is still assumed in the analysis, and thus corroborates our claim that the bounded domain assumption is unavoidable in this case. **3. On Assumption 2.** We remark that Assumption 2 is also made in the Euclidean bilevel analysis [35], even without assuming Assumption 1. See the first bullet point of Assumption 2 in [35], where function Lipschitzness implies a bounded gradient. **4. Stepsize depends on the selection of the domain.** This is a misunderstanding. First, we do not 'select' a domain. The bounded domain is mainly introduced for analysis purposes, as we have explained above. Second, we do not choose the stepsize based on the domain. The stepsize is selected based on the Lipschitz constants of the objective function, i.e., from Assumptions 2 and 3. This is also the case for the Euclidean bilevel analysis [35]. See, for example, Theorem 1 in [35]. As commented above, Assumption 2 can be made independent of Assumption 1, and thus the stepsize can be chosen independently of the domain. ------------------------- In summary, we sincerely thank you for all the questions, which have largely helped to improve the clarity of the paper. We will ensure a clearer presentation and discussion of the assumptions in our revised version. In particular, we will include the revised assumption as given above. Given that the rebuttal discussion is coming to an end, we would greatly appreciate it if you could kindly acknowledge our responses if there are no further questions. Thank you again for your time.
Summary: This paper presents a framework for addressing bilevel optimization problems in which variables of both lower- and upper-level problems are constrained on Riemannian manifolds. It introduces multiple hypergradient estimation strategies on manifolds and investigates their estimation error. The paper includes convergence and complexity analyses for the proposed hypergradient descent algorithm on manifolds. Furthermore, it extends these advancements to stochastic bilevel optimization and the utilization of general retraction on the manifolds. The paper also demonstrates the practicality of the proposed framework through several applications. Strengths: 1. The paper is well-written and is easy to read. 2. This paper derives the intrinsic Riemannian hypergradient using the implicit function theorem, proposes four strategies for estimating the hypergradient, and provides estimation error bounds for all the proposed strategies. 3. This paper presents the Riemannian hypergradient descent algorithm for tackling bilevel optimization problems on manifolds, providing convergence guarantees. Furthermore, it extends the framework to the stochastic setting and incorporates retraction mapping on the manifolds. Weaknesses: 1. Firstly, my primary concern pertains to the novelty of this paper. The algorithm proposed in this paper appears to mirror the framework introduced in a prior work [42]. Therefore, it is crucial to delineate the similarities and disparities between this paper and the referenced study [42]. 2. The experimental results should include a comparison with the methods presented in [42]. 3. The analysis of retraction on manifolds lacks depth and should include the convergence results of three additional hypergradient estimators. 4. Assumption 1 seems to be excessively strict, which is unusual in common bilevel optimization. Moreover, the notation $D$ in $d(x_1,x_2)\le D$ may be confusing. 
Furthermore, Assumptions 2 and 3 could be inferred from Assumption 1, indicating that this paper may not necessitate Assumptions 2 and 3. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The notations $C_{hinv}$, $C_{cg}$, $c_{ns}$, and $C_{ad}$ in Theorem 1 are not utilized in the main body of the paper. 2. Can the stochastic bilevel problem studied in this paper be extended to the general stochastic setting, i.e., $\underset{{x\in \mathcal{M}_x}}{\min} F(x) = \mathbb{E}[f(x,y)]$ and $y^*(x) = \underset{y\in\mathcal{M}_y}{\text{argmin}}~ \mathbb{E}[g(x,y)]$? 3. For other questions, please see the weaknesses above. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the general appreciation of our work as well as the constructive comments that we address below. **1. (W1) Similarities and disparities compared to [42].** We would like to emphasize that [42] is a *concurrent* work and not a prior work. It became available (publicly) after we had completed the draft. Moreover, we would like to highlight several notable differences compared to [42], some of which have already been discussed in lines 48-61. (1) Our proposed framework is more general and subsumes the developments of [42]. In particular, Algorithms 1 and 2 in [42] are special cases of our proposed RHGD-CG and RSHGD-NS algorithms, respectively. While [42] uses the exponential map in its Algorithms 1 and 2, we study the more general retraction, of which the exponential map is one specific instance. It should be noted that our contributions are not limited to CG variants of the proposed RHGD algorithm, as we study other hypergradient estimation strategies as well. Overall, we provide hypergradient estimation error bounds for a variety of estimation strategies and numerically compare them in terms of convergence, estimation error (Figure 1) and runtime (Table 3). (2) We present many applications that require solving a Riemannian bilevel optimization problem. Many of them are of independent interest. (3) In Appendix G, we have shown how the proposed framework also leads to a convergence analysis for Riemannian min-max optimization and Riemannian compositional optimization problems, which are special instances. To summarize, our work provides a more general and practical framework than [42]. **2. (W2) Experiment comparison with the methods presented in [42].** We have now included numerical comparisons in the one-page supplementary material. Specifically, we compare the performance on the synthetic problem (Figure 1) and the hyper-representation task (Figure 2).
We see the use of retraction in our case improves the efficiency of the solvers, which is reflected in the reduced runtime to achieve convergence. **3. (W3) The analysis of retraction should include the results of three additional hypergradient estimators.** Thank you for the suggestion. Below we include the convergence results for the three additional hypergradient estimators and will make sure to include them in the final revision. **Theorem.** (Extension of Theorem 3) Under Assumptions 1, 2, 3, and 5, let $\tilde L_F = 4 \kappa_l c_R M + 5 \bar c L_F$. Consider Algorithm 1 with the following settings. - HINV: Suppose $\eta_x = \Theta(1/\tilde L_F), S \geq \tilde \Theta(\kappa_l^2 \zeta)$. Then $\min_{k=0,..., K-1} \| \mathcal{G} F(x_k) \|^2_{x_k} \leq 16 \tilde L_F \Delta_0/K$. - CG: Suppose $\eta_x = \Theta(1/\tilde \Lambda), S \geq \tilde \Theta(\kappa_l^2 \zeta), T \geq \tilde \Theta(\sqrt{\kappa_l})$, where $\tilde \Lambda = C_v^2 \bar c + \kappa_l^2 ( \frac{5M^2 C_0^2 D^2}{\mu} + \bar c)$. Then $\min_{k=0,..., K-1} \| \mathcal{G} F(x_k) \|^2_{x_k} \leq 96 \tilde \Lambda (\Delta_0 + \| v_0^* \|^2_{y^*(x_0)})/K$. - NS: Suppose $\eta_x = \Theta(1/\tilde L_F), S \geq \tilde \Theta(\kappa_l^2 \zeta)$. Then for any $\epsilon>0$ and $T \geq \tilde \Theta(\kappa_l \log(1/\epsilon))$, we have $\min_{k=0,..., K-1} \| \mathcal{G} F(x_k) \|^2_{x_k} \leq 16 \tilde L_F \Delta_0/K + \epsilon/2$. - AD: Suppose $\eta_x = \Theta(1/\tilde L_F)$ and $S \geq \tilde \Theta(\kappa_l^2 \zeta \log(1/\epsilon))$ for any $\epsilon > 0$. Then $\min_{k=0,..., K-1} \| \mathcal{G} F(x_k) \|^2_{x_k} \leq 16 \tilde L_F \Delta_0/K + \epsilon/2$. The proof contains three parts: we first show the convergence of the inner iterations in terms of retraction, which is agnostic to the choice of hypergradient estimator. 
Then we bound the hypergradient estimation error, where only the AD-based hypergradient needs different treatment when using retraction, because it requires differentiating through the retraction. For the other hypergradient estimators, the bounds in Lemma 1 still hold given that no retraction is used in their computation. Lastly, we show that the defined Lyapunov function is monotonically decreasing. **4. (1) Assumption 1 is strict and unusual in bilevel optimization. (2) The notation $D$ in $d(x_1, x_2) \leq D$ is confusing. (3) Assumptions 2 and 3 could be inferred from Assumption 1 and the necessity to include Assumptions 2 and 3.** (1) Assumption 1 is often unavoidable in the literature of Riemannian optimization, especially when dealing with geodesically (strongly) convex objectives [21, 37, 62, 63, 64]. It is required to ensure bounded curvature in order to apply the trigonometric distance bound (Lemma 3), which is crucial for achieving linear convergence of the inner iterations. (2) For notational clarity on $d(x_1, x_2) \leq D$, we will change the notation in the revised manuscript. (3) It is correct that Assumptions 2 and 3 can be inferred from Assumption 1. Nevertheless, we explicitly state these assumptions for clarity and for properly defining the Lipschitz constants. **5. (Q1) The notations $C_{hinv}$, etc. in Theorem 1 are not used in the main text.** We will move such notations to the appendix. **6. (Q2) Extension to the general stochastic setting?** Yes, the proposed framework can accommodate the general stochastic setting. To this end, we need to construct an unbiased gradient estimate and properly control its variance. A more formal treatment of this general case is left for future exploration. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their responses. I am raising my score to 6.
Summary: This paper introduces a novel approach for solving bilevel optimization problems where both upper and lower-level variables are constrained on Riemannian manifolds. The authors propose four hypergradient estimation strategies (HINV, CG, NS, AD), analyze their estimation errors with convergence and complexity analysis, and generalize to stochastic bilevel optimization and general retraction. The paper also shows many applications of the framework including hyper-representation over SPD matrices, Riemannian meta-learning, and unsupervised domain adaptation. Strengths: 1. The proposed framework extends bilevel optimization to Riemannian manifolds, which fills a gap in this field. 2. The paper introduces multiple hypergradient estimation strategies and provides a thorough theoretical analysis. 3. Practical Relevance: Demonstrates the applicability of the framework through several machine learning applications, including hyper-representation and meta-learning. Weaknesses: 1. Assumptions: The framework relies on assumptions such as geodesic strong convexity and Lipschitz continuity, which may limit its applicability in some practical scenarios. 2. The practical performance evaluation is limited to synthetic and small-scale problems, which may not fully represent its utility in large-scale real-world applications. Technical Quality: 4 Clarity: 2 Questions for Authors: 1. What are the computational trade-offs when choosing between the different hypergradient estimation strategies? Confidence: 3 Soundness: 4 Presentation: 2 Contribution: 3 Limitations: The authors acknowledge that the framework relies on some strong mathematical properties that limit its broader applicability and may be improved in the future. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging that our theoretical analysis is thorough and our work has practical relevance. We also appreciate the constructive feedback. **1. (W1) Assumptions such as geodesic strong convexity and Lipschitz continuity.** The assumption of (geodesic) strong convexity is common for analyzing bilevel optimization, which is the case even in the Euclidean space [35,40,53]. This is because without strong convexity, the lower-level problem can be ill-defined, as there may exist many local solutions. Studying a bilevel problem with a non-(geodesically)-strongly-convex lower-level problem is challenging even in the Euclidean space. Nevertheless, it is possible to relax such an assumption to (geodesic) convexity with an extra strongly convex regularizer, or to a (Riemannian) PL condition where a global minimizer exists. On the other hand, the assumption on Lipschitzness is also standard in the bilevel literature, such as [35,40,46]. **2. (W2) Empirical evaluation is limited to synthetic and small-scale problems.** We would like to emphasize that in addition to the synthetic and small-scale settings, we evaluate our approach on real-world scenarios such as unsupervised domain adaptation on the Office-Caltech dataset (Section 4.4) and meta-learning where we optimize a convolutional neural network (CNN) with orthogonal constraints on the mini-Imagenet dataset (Section 4.3). In particular, the meta-learning example leads to a large-scale problem that involves training a 3-layer CNN, with each layer's kernel parameters constrained to the Stiefel manifold. This results in a parameter set of size $3 \times 144 \times 16$. In addition, we train over 20,000 tasks where each task contains 50 images of size $84 \times 84$ (5-ways, 5-shots). We will make sure to include the details on the experiment setup in the revised version. **3. 
(Q1) Computational trade-offs when choosing between the different hypergradient estimators.** We have discussed computational trade-offs in Table 1 and Appendix H.4 (together with convergence plots in Figure 1), where we compare the runtime for evaluating the hypergradient under different estimation strategies. In summary, CG is the most computationally demanding but yields the best approximation, while AD requires the least computational effort but gives a poor hypergradient estimate. NS balances computation and estimation error. --- Rebuttal 2: Comment: Thanks so much for the responses. I will keep my rating and raise my confidence.
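The trade-off discussed above revolves around approximating the inverse-Hessian-vector product $v = H^{-1} g$ that appears in the hypergradient. The following toy sketch (a Euclidean illustration under assumed values, not the paper's Riemannian code; the dimension `d = 50` and all matrices are invented for the example) contrasts a direct solve with conjugate gradient (CG) and a truncated Neumann series:

```python
import numpy as np

# Illustrative toy: the hypergradient needs v = H^{-1} g, where H is the
# (strongly convex) lower-level Hessian. CG and a truncated Neumann series
# approximate v without ever forming H^{-1} explicitly.
rng = np.random.default_rng(0)
d = 50
A = rng.standard_normal((d, d))
H = A @ A.T + d * np.eye(d)          # symmetric positive definite
g = rng.standard_normal(d)

def cg_solve(H, g, iters):
    """Plain conjugate gradient for H v = g."""
    v = np.zeros_like(g)
    r = g - H @ v
    p = r.copy()
    for _ in range(iters):
        Hp = H @ p
        alpha = (r @ r) / (p @ Hp)
        v += alpha * p
        r_new = r - alpha * Hp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return v

def neumann_solve(H, g, iters, eta):
    """Truncated Neumann series: H^{-1} ~= eta * sum_k (I - eta*H)^k."""
    v = np.zeros_like(g)
    term = g.copy()
    for _ in range(iters):
        v += term
        term = term - eta * (H @ term)
    return eta * v

v_exact = np.linalg.solve(H, g)          # direct solve ("hinv")
v_cg = cg_solve(H, g, iters=30)
eta = 1.0 / np.linalg.norm(H, 2)         # step below 1/lambda_max for convergence
v_ns = neumann_solve(H, g, iters=500, eta=eta)
print(np.linalg.norm(v_cg - v_exact), np.linalg.norm(v_ns - v_exact))
```

CG typically reaches high accuracy in few iterations per Hessian-vector product, while the Neumann series trades more, cheaper iterations for a tunable accuracy, which is consistent with the trade-off the rebuttal describes.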
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for the reviews. We are particularly encouraged by the numerous positive comments on our work, including **well-written** (Reviewer Bnza, Reviewer 4rfq), **comprehensive analysis** (Reviewer CDbu, Reviewer kqCt, Reviewer JBnb), **solid theoretical results** (Reviewer CDbu, Reviewer 4rfq), **interesting applications** (Reviewer CDbu), and **extensive numerical results** (Reviewer CDbu, Reviewer kqCt, Reviewer JBnb). Additionally, we greatly appreciate your constructive feedback, which has significantly helped us improve the paper. In response to your comments, we have - addressed each point individually, and - attached a supplementary one-page PDF containing numerical comparisons with [42]. We hope we have addressed all your concerns and questions through our responses as well as the additional experimental results. We look forward to more interactive discussions during the discussion phase. Pdf: /pdf/538719ecd51c492239615b2f02d06731b8db82ef.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper addresses bilevel optimization problems where variables of both the lower and upper-level problems are constrained on Riemannian manifolds. To solve such problems, the authors propose a hypergradient descent algorithm under the key assumption that the lower function is geodesically strongly convex and establish its iteration complexity. To efficiently estimate the hypergradient, several methods with error bounds are also proposed. Additionally, the paper provides several interesting application examples of bilevel optimization on Riemannian manifolds and presents numerical results for these examples. Strengths: - This paper generalizes the widely used hypergradient descent algorithm to the Riemannian manifold setting and provides a comprehensive analysis of the proposed algorithms. - The paper includes four approaches to estimating the inverse of the Riemannian Hessian and a stochastic version of the proposed algorithm. - The theoretical analysis is solid, with estimation error bounds and the iteration complexity of the proposed algorithm presented. - The numerical results are extensive, showcasing results on four interesting Riemannian bilevel optimization application examples. Weaknesses: - A closely related work is [42], and although the authors claimed key differences in Section 1, some points should be clarified. - The various hypergradient estimators beyond conjugate gradient and Neumann might be standard. Do the other estimators show better performance? - While the paper presents interesting applications (see instances given in Section 4.2), these problems can also be solved by the methods in [42]. The authors should provide a numerical comparison of the methods. - Shall the authors include the complexities of the methods in [42] in Table 1? - The estimations of the inverse Hessian are also widely used in Euclidean space. 
- The assumption of geodesic strong convexity for the lower function is too strong for the Riemannian setting, making the application examples in the paper less typical. Additionally, it should be noted that the lower-level problem in Section 4.2 is constrained in Euclidean space. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why are the performances of “hinv-20” and “hinv-50” similar in Figures 1(a) and 2(a)? 2. Why is “hinv” more efficient, as shown in Figure 1(b)? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the limitations are acknowledged. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments recognizing our solid theoretical analysis and extensive numerical results. We also greatly appreciate the constructive feedback and comments. **1. (W1) Clarification regarding [42]: (1) Do other estimators show better performance? (2) Provide numerical comparisons with [42]. (3) Include complexities of methods in [42].** Thank you for raising these points. Below, we address each one individually. (1) There exists an accuracy-runtime trade-off among these estimators. The trade-off can be observed from Figure 1, where we compared various estimators in terms of convergence, hypergradient error, and runtime. We have also additionally compared per-hypergradient runtime for these estimators in Table 3 (Appendix H.4). From the figures and table, we see that the hypergradient based on automatic differentiation requires the least runtime while suffering from poor hypergradient approximation accuracy. Neumann series based estimators balance the trade-off, while conjugate gradient requires the most runtime but has the lowest approximation error. Choosing a suitable estimator depends on the specific application. (2-3) We highlight that the methods presented in [42] are covered by the framework that we propose, i.e., Algorithm 1 in [42] coincides with RHGD-CG and Algorithm 2 in [42] is the RSHGD-NS that we propose, except that they use the exponential map for the update, while our framework works for general retractions. Thus, the methods in [42] can be considered as special cases in our framework, for which we have already included the complexities. *Please find numerical comparisons with [42] in the one-page supplementary material.* In particular, we compare on the synthetic problem (Figure 1) and shallow and deep hyper-representation (Figure 2). We observe that the use of retraction in our case results in more efficient updates and thus reduces the runtime for convergence. **2. 
(W2) The estimations of the inverse Hessian are also widely used in Euclidean space.** While it is true that the estimation strategies for the inverse Hessian have been explored in the Euclidean space, in the more general manifold setting, more care is needed when using second-order derivatives. For example, $\mathcal{G}^2_{xy}$ (used in Algorithm 1) is a second-order cross derivative relating two different manifolds (one for x and the other for y), and defining this is tricky. Additionally, the Hessian is now an *operator on the tangent space* instead of a matrix as in the Euclidean space. Consequently, its inverse needs to be properly characterized and computed. **3. (W3) (1) Assumption of geodesic strong convexity is strong. (2) Lower-level problem in Section 4.2 is in Euclidean space.** (1) Analyzing a bilevel problem with a non-(geodesically)-strongly-convex lower-level problem is tricky (even in the Euclidean setting) because of the non-unique solution of the lower-level problem. Non-unique solutions would give rise to several challenging cases, such as the hypergradient becoming ill-defined, invertibility of the Hessian, etc. However, it is possible to relax this assumption to a geodesically convex lower-level objective by adding a strongly convex regularizer term to the objective function. (2) Since Euclidean space is a special case of a Riemannian manifold, it is not unreasonable to consider problems where the lower-level problem is in the Euclidean space. However, we would like to emphasize that Sections 4.1 and 4.4 explore lower-level problems constrained to the SPD manifold. **4. (Q1) Why are the performances of hinv-20 and hinv-50 similar?** It should be noted that the numbers 20 and 50 refer to the number of inner iterations for solving the lower-level problem. The similar performance of hinv-20 and hinv-50 is because the lower-level problem in this synthetic example is easy to solve, and 20 iterations already provide good, close-to-optimal solutions. **5. 
(Q2) Why hinv more efficient as in Figure 1(b)?** In this synthetic example, we can derive an analytic form of the Hessian inverse. In addition, the problem dimension is not large (d = 50) and thus computing the Hessian inverse directly is more efficient in this setup. --- Rebuttal Comment 1.1: Comment: Thank the authors for clarification. I have increased my score to 6.
Demystify Mamba in Vision: A Linear Attention Perspective
Accept (poster)
Summary: This paper presents a thorough analysis of the key factors contributing to the success of the S6 module in the Mamba model, and introduces a new linear attention vision network, MLLA, inspired by S6's design. Extensive experimental results show outstanding performance, validating the effectiveness of the proposed model. Strengths: The motivation and analysis underlying this work are fascinating and perceptive, offering a profound examination of the mathematical relationships between Mamba and Linear Attention. A thorough investigation of the performance contributions of each component from S6 is conducted, with corresponding refinements made to linear attention, ultimately leading to the development of MLLA. The proposed MLLA serves as a generic vision backbone network, which outperforms recent Mamba-based models, demonstrating its superior capabilities. Weaknesses: I appreciate the paper's insightful analyses, but several key concerns regarding methodology and experiments need to be addressed. If the authors fully resolve these issues, I would be willing to reconsider my evaluation. **Possible inaccurate analysis of single-head attention** The analysis of single-head attention appears to be inaccurate and unclear. Generally, multi-head attention in Transformer generates H dynamic matrices during attention computation, where H represents the number of heads. However, the paper seems to interpret S6 as a single-head module without providing sufficient evidence. The authors should explain this point from the perspective of the number of attention matrices generated. Moreover, according to Reference 1, S6 can actually produce multiple dynamic matrices instead of just one. **Unfair comparison on ImageNet-1K dataset** The paper uses MESA as its optimizer, improving accuracy compared to AdamW. In contrast, the methods in Table 3 do not use MESA and may also be prone to overfitting. 
For fairness, authors should ensure that training strategies are consistent with competitors. Moreover, the authors should not hide this apparent difference in the appendix. **Questionable motivation for using Swin as the baseline** The motivation for using Swin as a baseline is questionable. One important aspect of linear attention is its ability to efficiently establish global dependencies, so it is unclear why window-based linear attention is used. Additionally, the specific configuration of the baseline model, such as channels and depth, is not provided. **Unclear setting of d_state in SSM block design** In the block design, it is unclear what value d_state of SSM is set to. **Insufficient experiments on downstream tasks** (1) Semantic segmentation should include tiny, small, and base models. (2) Mask R-CNN 3x should include the base model. (3) The performance comparison of MLLA-T and VMamba-T on object detection is lacking. **Unfair comparison in Table 6** The comparison in Table 6 is unfair. The authors aim to compare the performance of different linear attention methods, but different block settings have a significant impact. The authors should use the same architecture, replacing only token mixer module. **Typo** In line 294, 'SMM' should be 'SSM'. Technical Quality: 3 Clarity: 2 Questions for Authors: Please see the main weaknesses. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Please see the main weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would first like to express our appreciation for your time and insightful comments. Please find our response to your concerns in the following: --- **1. The analysis of single-head attention.** Thanks for your insightful comment. - ***The concept "head" in our paper is the same as its original definition in multi-head attention [1], i.e. the number of groups of $Q, K$, rather than the number of dynamic matrices.*** Since there exists only one set of $C,B$ in S6 (see Eq. 11 of our paper), ***it resembles single-head linear attention.*** - ***A head can possibly generate multiple dynamic matrices.*** In multi-head attention, one head can only generate a single dynamic matrix $QK^\top$. However, a head of S6 can produce multiple equivalent dynamic matrices. This can be attributed to its special input gate and forget gate design. Specifically, as shown in Eq. 11 and Eq. 12 of our paper, there is only one set of $C\in\mathbb{R}^{N\times d}, B\in\mathbb{R}^{d\times N}$ in S6. Therefore, without considering the input gate $\Delta_i$ and forget gate $\hat{A}_i$, the model can produce only one dynamic attention matrix $CB\in\mathbb{R}^{N\times N}$. However, the input gate $\Delta_i$ and forget gate $\hat{A}_i$ make things different. Specifically, $\Delta_i\in\mathbb{R}^{1\times c}$ and $\hat{A}_i\in\mathbb{R}^{d\times c}$ are not shared across channels $c$, which enables S6 to produce $c$ equivalent dynamic matrices with the single set of $C, B$. Intuitively, S6 generates one attention matrix $CB\in\mathbb{R}^{N\times N}$, yet each channel filters tokens differently based on its input gate weight and introduces unique local biases with its forget gate weight, leading to varied equivalent dynamic matrices. 
- ***In conclusion, S6 uses a single-head design, but its input gate and forget gate lead to multiple equivalent dynamic matrices.*** - Additionally, we can introduce a multi-head design to S6 by initially generating multiple sets of $C^h\in\mathbb{R}^{N\times d}, B^h\in\mathbb{R}^{d\times N},h=1,\cdots,\rm{num\\_heads}$, akin to multi-head attention. [1] Attention is all you need. In NeurIPS, 2017. --- **2. The comparison on ImageNet-1K dataset.** ***Firstly***, MESA was not utilized in ablation experiments or downstream tasks. These results already fully support the two core findings of this paper: (1) The forget gate and block design are the key designs of Mamba. (2) With the merits of these two key designs, MLLA can outperform various vision Mamba models. ***Secondly***, we clarify MESA as follows: - MESA is an overfitting prevention strategy rather than an optimizer. - ***Without MESA, the MLLA model can also achieve comparable results.*** For example, after removing MESA and increasing the drop path rate—a commonly used strategy to prevent overfitting—to 0.3, MLLA-T yields 83.3 accuracy on ImageNet, which still significantly surpasses various SOTA vision Mamba models, e.g. LocalVMamba-T: 82.7, VMamba-T: 82.5, etc. - In our early experiments, we found MESA slightly beneficial for the MLLA model, but it did not benefit other models as well and even hindered their performance. --- **3. The motivation for using Swin as the baseline.** - ***Window-based attention is not used.*** The shifted window attention in Swin Transformer is replaced with global linear attention to create our baseline model. ***We only employ the macro structure of Swin***, i.e., width, depth, etc., as a clean architecture. - The baseline model shares the identical configuration as Swin-T, with widths = [96, 192, 384, 768] and depths = [2, 2, 6, 2]. --- **4. The setting of d_state in SSM block design.** The concept of d_state is akin to head_dim in attention. 
Therefore, following Swin Transformer, we set it as 32 for all the models in our study. --- **5. The experiments on downstream tasks.** - MLLA's effectiveness on downstream tasks is fully validated by comprehensive COCO object detection and instance segmentation experiments. Hence, we conduct only one experiment on the semantic segmentation task to save computation resources. - As requested, we provide additional results for the semantic segmentation task below.

| Backbone | \#Params | FLOPs | mIoU SS | mIoU MS |
| :------: | :------: | :---: | :-----: | :-----: |
| VMamba-T | 62M | 949G | 47.9 | 48.8 |
| MLLA-T | 55M | 932G | 48.1 | 49.0 |

Due to time and resource constraints, we couldn't provide the MLLA-S results here. - To the best of our knowledge, the result for Mask R-CNN 3x + base model is not provided by any vision Mamba models we compared with. Hence, we also omit this experiment to save computation resources. - Full comparison with VMamba-T on object detection is provided **in the PDF in the general response**. MLLA-T achieves comparable results to VMamba-T in the 3x setting but slightly underperforms VMamba-T in 1x training. This can be attributed to MLLA-T having 6M fewer parameters than VMamba-T, possibly requiring longer training to converge. MLLA-S/B significantly outperform VMamba-S/B in both 1x and 3x settings. --- **6. The comparison in Table 6.** - As detailed in our paper, ***the block structure is also a key design of Mamba and our MLLA.*** Hence, we provide a direct comparison with other linear attention methods in Table 6 to validate the effectiveness of our design. - Here we offer an additional comparison under exactly the same macro architecture, the Swin-T structure.

| Method | \#Params | FLOPs | Acc. |
| :-----------------: | :------: | :---: | :--: |
| Hydra Attention | 29M | 4.5G | 80.7 |
| Efficient Attention | 29M | 4.5G | 81.0 |
| FLatten Transformer | 29M | 4.5G | 82.1 |
| MLLA | 30M | 4.8G | 82.6 |

MLLA shows a clearly better result. 
It is noteworthy that we are unable to provide results for SOFT under this setting since it requires more than 1 hour to train one epoch. --- **7. Typo.** Thanks. It will be corrected. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response, which has addressed some of my concerns. Although I understand that the purpose of this paper is to establish connections between various mechanisms in Mamba and Linear Attention, the main issue remains the unfair experimental implementation. The authors should provide results without MESA in Table 3 for tiny, small, and base models, as MESA can boost performance like token labeling, and this should be explicitly indicated. The current presentation may mislead follow-up works due to the unfair training conditions. The authors' claim that other models do not benefit from MESA lacks any supporting evidence. Regarding downstream tasks, while the authors clarify that they did not use MESA, it’s important to note that the pre-trained models have incorporated MESA. This introduces an inherent unfairness in weight initialization, which may affect the fairness of the final performance comparisons. The authors mention that other Mamba-based models have not been tested on 3x schedule-based detection tasks, which raises a question: are Mamba-based models unable to train successfully on 3x schedule-based detection tasks? If so, the potential applications of Mamba-based models would be significantly limited. Additionally, please note that the proposed method belongs to the vision transformer family, not Mamba-based models, and it is crucial to conduct related experiments as listed in many previous works. Reporting results with the 3x schedule is more meaningful, as this allows the models to converge to a more consistent outcome, providing a more reliable comparison. 
The authors should be aware that one of the most crucial aspects of a vision backbone paper is the experimental section, which must adhere to highly consistent training conditions with other competitors to ensure a fair comparison. Implicitly introducing additional tricks to improve performance is misleading and unfair to follow-up works. I must reiterate that the unfair experimental design remains a significant concern. --- Reply to Comment 1.1.1: Comment: We would like to express our appreciation for your comments. However, we believe there are some misunderstandings and we offer clarifications below. --- **1. MESA does not affect the three main contributions and findings of this work.** We would like to emphasize that the three main contributions of our paper remain unaffected by the use of MESA: - We reveal Mamba’s close relationship to linear attention Transformer. - We provide detailed analyses of each special design and validate that the forget gate and block design largely lead to Mamba’s superiority. - We present MLLA, a novel linear attention model that outperforms vision Mamba models. The first finding is thoroughly analyzed in Section 4 of our paper. The second one is validated by the results in Table 1 and Table 2, ***where MESA is not employed.*** The last contribution of our work, MLLA, uses MESA in its training. However, as we already mentioned in our previous response, without MESA, MLLA-T can also achieve 83.3 accuracy and still significantly surpasses various vision Mamba models. Notably, the MLLA model is built to validate our last finding, i.e. whether linear attention can match or surpass Mamba in vision, rather than to compete against SOTA vision Transformers. ***In conclusion, MESA does not influence the core contributions of our work, but rather serves as an additional strategy to help MLLA perform optimally.*** We further clarify our intent and the reason for using MESA in the following. --- **2. 
The reason for using MESA.** We would first like to clarify that MESA is just a strategy to prevent overfitting, and it cannot boost performance like token labeling. Token labeling benefits from ***a pre-trained model***, functioning similarly to distillation, whereas MESA only enhances the model's generalization ability and ***doesn't use any pre-trained models.*** Just as in the early stages of vision Transformer research, vision Mamba research currently does not have a well-established and universally accepted training protocol. The conventional training setting for vision Transformers may not be optimal for vision Mamba and our Mamba-Like Linear Attention. Therefore, we additionally employ the overfitting prevention strategy MESA to alleviate the overfitting problem of the MLLA model and fully demonstrate its potential. Our goal is to provide the community with more robust models. We believe that excessive pursuit of a strictly unchanged training setting could actually restrict the exploration of new architectures. --- **3. Results without MESA.** - To better address your request, we further provide the results without MESA. - Actually, we already provided the result for ***MLLA-T trained without MESA*** in our previous response. Here, we offer a comparison with vision Mamba models based on this result.

| Model | #Params | FLOPs | Acc. |
| :---------------: | :-----: | :---: | :--: |
| Vim-S | 26M | 5.1G | 80.3 |
| VMamba-T | 31M | 4.9G | 82.5 |
| LocalVMamba-T | 26M | 5.7G | 82.7 |
| MLLA-T (w/o MESA) | 25M | 4.2G | 83.3 |

***As you can see, without MESA, MLLA-T can also achieve a comparable result and still significantly surpass various SOTA vision Mamba models.*** - We are currently in the process of training ***MLLA-S/B under the no-MESA setting*** and will provide those results ***in a few days.*** - Furthermore, we will utilize the backbone trained without MESA to conduct downstream tasks. - All these results will be included in the revised manuscript. 
We will provide more experimental results within our capabilities and try our best to benefit the community and follow-up works. --- **4. Downstream tasks.** - There seems to be a misunderstanding. In our previous response, we stated that Mask R-CNN 3x results for ***base level models, e.g. VMamba-B,*** are hardly reported by vision Mamba works. However, we did not claim that Mamba-based models have not been tested on the 3x schedule. Indeed, we already provided a comparison of ***tiny and small level models under the 3x schedule in Table 9 of our paper.*** - The primary focus of our paper is on comparisons with vision Mamba models, rather than achieving SOTA results. Given that the works we compared with do not present 3x detection results for base level models, we also omitted this experiment previously. Currently, we are working on this experiment to better address your request.
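The single-head-with-gates argument in point 1 of the rebuttal above (one shared set of $C, B$, but per-channel input and forget gates yielding per-channel equivalent dynamic matrices) can be sketched with a small numpy toy. This is an assumed simplification, not the authors' code: the forget gate here is one scalar per token and channel rather than the $d \times c$ gate in the paper, and all shapes and values are invented for illustration.

```python
import numpy as np

# Toy gated linear-attention recurrence in the spirit of S6.
# B_i, C_i are shared across channels ("single head"); the input gate
# delta and forget gate a are per-channel, so each channel effectively
# sees its own causal dynamic attention matrix.
rng = np.random.default_rng(0)
N, d, c = 6, 4, 3                        # tokens, state dim, channels
B = rng.standard_normal((N, d))          # one shared set of B_i
C = rng.standard_normal((N, d))          # one shared set of C_i
x = rng.standard_normal((N, c))          # input tokens
delta = rng.uniform(0.1, 1.0, (N, c))    # input gate, per channel
a = rng.uniform(0.5, 1.0, (N, c))        # forget gate (scalar per channel here)

# Recurrent form: one d-dim hidden state per channel.
h = np.zeros((d, c))
y_rec = np.zeros((N, c))
for i in range(N):
    h = a[i] * h + np.outer(B[i], delta[i] * x[i, :])
    y_rec[i] = C[i] @ h

# Equivalent "attention" form: per channel, a causal N x N matrix built
# from the single C B^T, reweighted by that channel's gates.
y_att = np.zeros((N, c))
for ch in range(c):
    M = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            decay = np.prod(a[j + 1:i + 1, ch])   # forget-gate decay from j to i
            M[i, j] = (C[i] @ B[j]) * delta[j, ch] * decay
    y_att[:, ch] = M @ x[:, ch]

print(np.allclose(y_rec, y_att))  # the two forms coincide
```

Unrolling the recurrence shows why: with a single $CB^\top$, the per-channel gates reweight its causal entries differently per channel, producing $c$ equivalent dynamic matrices from one head.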
Summary: The paper takes a deep dive into the workings of Mamba in vision related tasks and compares it to Linear Attention. The authors conclude that Mamba is a special form of Linear Attention and describe the roles of certain parts of the architecture, giving them names that better represent their actual use. They come up with the input and forget gates that control the flow of information, while shortcuts are closely related to residual connections. The authors take those learnings, apply them to linear attention, and show that those concepts do improve the linear attention mechanism up to the point where it beats Mamba. Strengths: The authors draw clear connections between Linear Attention and Mamba, which is showcased in terms of raw similarity (Eq. 11 and 12). Furthermore, they underline this by showing the input gate values on two example images, which helps general understanding. They demonstrate the usefulness of the explained modules in Mamba, apply them to Linear Attention, and show that they do improve performance. In total, a sound paper. The overall gist can be followed very nicely. Weaknesses: No apparent weaknesses Technical Quality: 4 Clarity: 4 Questions for Authors: - Why did you choose to use Swin Transformer? - The forget gate creates local bias and position information but also decays the previous hidden state, if I understood correctly. When replaced by RoPE or similar, the performance increases as shown in Table 2. Does this mean that the local bias and the decay can hinder performance? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors mention the limitations of their work, especially that it is not exhaustive. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would first like to express our appreciation for your time and insightful comments. Please find our response to your concerns in the following: --- **1. Reason for using Swin Transformer.** We offer clarification on employing Swin Transformer: - Swin Transformer is a widely used architecture in vision tasks. ***We use it as a clean and fair structure*** to validate the effectiveness of each special design of Mamba. - Notably, the core operation of Swin Transformer, shifted window attention, is replaced with global linear attention to form our baseline model. Therefore, ***we only employ the macro structure of Swin***—such as width and depth—without employing its window-based attention mechanism. --- **2. Replacing the forget gate with positional encoding.** - Thanks for the insightful question. Yes, the forget gate decays the previous hidden state, thus providing local bias and position information. - ***Local bias and decay do not hinder performance.*** Instead, they are beneficial for the model. For example, RoPE provides long-term decay and local bias, as demonstrated in its paper [1]. As shown in Table 2 of our study, integrating RoPE into the baseline linear attention model leads to an obvious performance improvement from 77.6 to 80.0. This indicates that local bias and decay can enhance model effectiveness. - ***The reason why positional encodings outperform the forget gate in Table 2 is that they enjoy a global receptive field.*** As analyzed in our paper, the forget gate has to employ recurrent calculation, which restricts the receptive field of each token to the preceding sequence, resulting in a causal mode. As verified by a concurrent work, MambaOut [2], such a causal mode can lead to a notable performance drop in vision models (see its Fig. 3). Therefore, while the local bias and decay of the forget gate can benefit the model, its causal mode hinders performance. 
In contrast, positional encodings enable the model to benefit from both local bias and a global receptive field at the same time, thus yielding better results than the forget gate. [1] Roformer: Enhanced transformer with rotary position embedding. Neurocomputing. [2] MambaOut: Do We Really Need Mamba for Vision? arXiv:2405.07992. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their clarifications. I will keep my score unchanged. --- Rebuttal 2: Comment: Dear Reviewer dwbt, thank you for your insightful review and for engaging with our work. We would like to know if there are any additional questions or concerns. We are eager to engage in further discussion and provide clarification to fully address them.
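The causal, recurrent forget-gate mode discussed in the rebuttal above can be illustrated with a short sketch. This is a hypothetical NumPy toy (scalar per-step decay, no normalization term), not the paper's implementation:

```python
import numpy as np

def forget_gate_linear_attention(Q, K, V, f):
    """Toy causal linear attention with a per-step forget gate.

    Illustrative sketch only: the hidden state S is decayed by f[i]
    before the current key/value outer product is added, so token i
    can only see tokens 0..i (the causal mode discussed above).
    Q, K: (n, d); V: (n, d_v); f: (n,) decay factors in (0, 1).
    """
    n, d = Q.shape
    S = np.zeros((d, V.shape[1]))
    out = np.zeros((n, V.shape[1]))
    for i in range(n):
        S = f[i] * S + np.outer(K[i], V[i])  # decay past state, add current token
        out[i] = Q[i] @ S                    # query the decayed state
    return out
```

Because of the recurrent form, each output depends only on the preceding tokens, which is exactly the restricted receptive field the rebuttal identifies as the drawback relative to positional encodings with a global receptive field.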
Summary: This paper explores the similarities and differences between the Mamba and linear attention Transformer models. It redefines Mamba as a variant of the linear attention Transformer with six key distinctions. The paper also studies each design's impact, identifying the forget gate and block design as key to Mamba's success. Based on these findings, the paper introduces a Mamba-Like Linear Attention (MLLA) model, which outperforms other vision Mamba models on various tasks while maintaining fast computation and inference speed. Strengths: * The paper is well-written and well-motivated from the perspective of linear attention. * The findings are interesting to me. Weaknesses: * Several works using Mamba for image synthesis should be discussed [1][2][3]. In Line 68, I wonder whether the exploration in this paper is orthogonal to previous explorations on Mamba. For example, Zigma considers layerwise scan paths with 8 directions. Can the proposed method still yield some improvements on that result? * Another consideration is that the authors should discuss this paper: https://arxiv.org/abs/2406.06484. Check Table 4; it provides a much more general framework that includes most of the linear attention models. * My third concern is whether the exploration in this work can extend to other linear attention-based models, such as RWKV and xLSTM. Technical Quality: 3 Clarity: 3 Questions for Authors: I have some concerns (see weaknesses) about this paper. I am inclined to raise my score if my concerns are fully resolved. **Reference:** [1] Diffusion Models Without Attention. CVPR 2024. [2] ZIGMA: A DiT-style Zigzag Mamba Diffusion Model. ECCV 2024. [3] Scalable Diffusion Models with State Space Backbone. arXiv. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes, they are addressed and discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would first like to express our appreciation for your time and insightful comments. Please find our response to your concerns in the following: --- **1. Mamba for image synthesis.** - Thanks for pointing out these works. ***We will give more credit to them and include detailed discussions in the revised version.*** - The exploration in this paper is orthogonal to previous explorations on Mamba, and ***our work can further benefit previous works in two ways***: - ***a. Previous studies on Mamba can substitute their SSM with our MLLA block***, which is proven to be more effective and efficient. - ***b. The analyses in our work can help other studies design better Mamba models.*** Our work reveals some suboptimal designs of Mamba, such as the absence of attention normalization, an important component. Hence, previous works on Mamba, e.g. Zigma, can add attention normalization to their model to achieve better results. This enhancement is compatible with other explorations, such as layer-wise scan paths with 8 directions. - ***This paper focuses on analysis.*** Currently, we mainly conduct classification, object detection and semantic segmentation experiments, which already validate our analyses and findings. In the future, we will apply MLLA to the image synthesis task to develop efficient models. --- **2. Discussion with the mentioned paper.** - Thank you for pointing this out. ***The mentioned paper was made public on June 10, 2024, after the NeurIPS submission deadline of May 22, 2024.*** - The mentioned work proposes a unified framework for efficient autoregressive sequence transformations, demonstrating various linear attention models within this framework. Despite some similarities with our work, there are fundamental distinctions: - a. 
Our work not only reveals the close relationship between Mamba and linear attention, but more importantly, ***provides analyses and experiments to verify the effectiveness of each design.*** In contrast, the referenced work mainly develops ***a framework to include*** different linear attention models. - b. ***Our analyses ultimately lead to the development of a novel linear attention model, MLLA***. In contrast, the mentioned work focuses on the development of ***efficient training algorithms for existing models.*** - ***We will discuss this paper in the revised manuscript.*** --- **3. Extension to other linear attention-based models.** - ***The exploration in this work can be extended to other linear attention-based models.*** - Our work reveals that Mamba can be viewed as a special variant of vanilla linear attention. Other linear attention-based models, e.g. RWKV, xLSTM, can also be viewed as variants of vanilla linear attention. Therefore, the exploration and insights in this work can naturally extend to these models. - ***We take xLSTM as an example.*** According to Eq. 4 of our paper, linear attention can be written as: $$ S_i=S_{i-1}+K_i^\top V_i,\quad Z_i=Z_{i-1}+K_i^\top, \quad y_i=Q_i S_i / Q_i Z_i $$ Using the same notations, xLSTM can be formulated as: $$ S_i=F_i S_{i-1}+I_i K_i^\top V_i,\quad Z_i=F_i Z_{i-1}+I_i K_i^\top, \quad y_i=Q_i S_i / \text{max}\\{1, Q_i Z_i\\}, $$ where $I_i$ is the input gate and $F_i$ is the forget gate. Additionally, xLSTM employs a multi-head design according to its paper. ***Therefore, xLSTM resembles linear attention with an additional input gate $I_i$, forget gate $F_i$, and modified normalization $\text{max}\{1, Q_i Z_i\}$.*** As a result, many explorations of this work can be extended to xLSTM. For instance, we can make ***bold speculations*** like: - a. The forget gate $F_i$ is likely to play a crucial role in xLSTM. - b. 
When applying xLSTM to vision models, the forget gate could possibly be replaced with positional encodings, just as we did in Table 2. - c. The input gate $I_i$ may not offer significant benefits in vision tasks. - d. Enhanced block designs from Mamba could potentially benefit xLSTM. - e. The modified normalization $\text{max}\\{1, Q_i Z_i\\}$ might provide greater stability compared to vanilla linear attention, warranting further exploration. - ***In conclusion, the analyses and insights of our work can extend to other linear attention-based models, helping improve their interpretability, further enhance the models, etc.*** --- Rebuttal Comment 1.1: Comment: Thank you for your response. After reading the response and other reviewers' comments, I believe that my concerns are mostly resolved and I will increase my rating from 4 to 6. --- Reply to Comment 1.1.1: Comment: Thank you again for your time and valuable comments.
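The linear attention and xLSTM-style recurrences written out in the rebuttal above can be sketched numerically. This is an illustrative NumPy sketch under the rebuttal's notation, with scalar per-step gates; it is not code from either paper:

```python
import numpy as np

def linear_attention(Q, K, V):
    """Vanilla linear attention in recurrent form (Eq. 4 notation above):
    S_i = S_{i-1} + K_i^T V_i, Z_i = Z_{i-1} + K_i^T, y_i = Q_i S_i / (Q_i Z_i)."""
    n, d = Q.shape
    S = np.zeros((d, V.shape[1]))
    Z = np.zeros(d)
    out = np.zeros((n, V.shape[1]))
    for i in range(n):
        S += np.outer(K[i], V[i])
        Z += K[i]
        out[i] = (Q[i] @ S) / (Q[i] @ Z)
    return out

def xlstm_style(Q, K, V, I, F):
    """xLSTM-like recurrence as formulated above, with scalar gates per step:
    S_i = F_i S_{i-1} + I_i K_i^T V_i, Z_i = F_i Z_{i-1} + I_i K_i^T,
    y_i = Q_i S_i / max(1, Q_i Z_i)."""
    n, d = Q.shape
    S = np.zeros((d, V.shape[1]))
    Z = np.zeros(d)
    out = np.zeros((n, V.shape[1]))
    for i in range(n):
        S = F[i] * S + I[i] * np.outer(K[i], V[i])
        Z = F[i] * Z + I[i] * K[i]
        out[i] = (Q[i] @ S) / max(1.0, Q[i] @ Z)
    return out
```

With $I_i = F_i = 1$ and positive features (so that $Q_i Z_i \geq 1$), the xLSTM-style recurrence reduces exactly to vanilla linear attention, which makes the "variant of linear attention" view concrete.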
Summary: This paper primarily discusses the similarities and differences between the Mamba model and the Linear Attention Transformer, and conducts an in-depth analysis of the key factors contributing to Mamba's success in visual tasks. The paper elaborates on six major design differences of Mamba compared to the Linear Attention Transformer: input gate, forget gate, shortcut connection, attention-free normalization, single-head attention, and modified block design. For each of these designs, the author meticulously analyzes their advantages and disadvantages, and evaluates their applicability in visual tasks. Through experimental research, the paper emphasizes that the forget gate and block design are the core contributors to Mamba's success. Based on these findings, the author proposes a model called Mamba-Like Linear Attention (MLLA), which integrates the advantages of the forget gate and block design into the linear attention framework. Experimental results show that MLLA outperforms various visual Mamba models in image classification and high-resolution dense prediction tasks, while maintaining parallel computing capabilities and fast inference speed. Strengths: 1. The paper provides a new perspective for understanding Mamba's success by thoroughly analyzing the similarities and differences between the Mamba model and the Linear Attention Transformer. By rephrasing formulas, the paper considers Mamba as a variant of the Linear Attention Transformer and clearly points out Mamba's six design features, including input gate, forget gate, shortcut connection, attention-free normalization, single-head attention, and modified block design. 2. Each design aspect is meticulously analyzed, and its advantages and disadvantages in visual tasks are evaluated. 
Experimental results verify that the forget gate and block design significantly contribute to Mamba's performance improvement, while other designs either have minimal marginal contributions or may potentially harm model performance. 3. The Mamba-Like Linear Attention (MLLA) model is proposed, combining the advantages of the forget gate and block design in linear attention. The MLLA model outperforms existing Mamba models in image classification and high-resolution dense prediction tasks while maintaining computational parallelization and high-speed inference capabilities. Weaknesses: There are some minor issues: 1. In the final MLLA model, there is no SSM but a forget gate. The forget gate was originally proposed in RNNs and LSTMs. Thus, calling it a Mamba-like model seems improper. 2. Previous "Demystify" papers are not cited in the paper, e.g., [a, b, c]. The final architecture MLLA follows the macro structure of Swin, and [a, b, c] all show that the macro structure is very important but the local blocks are not so important. The paper should build a connection with the previous "Demystify" papers and tell us what the most effective parts are. [a] On the Connection between Local Attention and Dynamic Depth-wise Convolution, ICLR 2022 [b] What Makes for Hierarchical Vision Transformer? DOI: 10.1109/TPAMI.2023.3282019 [c] Demystify Transformers & Convolutions in Modern Image Deep Networks, arXiv:2211.05781 Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How many GPUs were used to perform this research, including the training cost and ablation experiment cost? 2. Why is Fig. 1 placed on page 2 but cited on page 4? 3. What are the results of Mamba + Swin? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Nope. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would first like to express our appreciation for your time and insightful comments. Please find our response to your concerns in the following: --- **1. The name of MLLA model.** Thanks for your comment. ***Firstly***, we name the final model Mamba-Like Linear Attention because it incorporates two key designs from Mamba: the forget gate and block design. Given that there is no SSM in the final model, maybe Mamba-Inspired Linear Attention (MILA) is a better name? ***Secondly***, we are very happy to rename the model if a more suitable name is suggested. --- **2. Previous "Demystify" papers.** Thanks for the valuable suggestion. We offer discussions of these works. [1] studies the connection between local attention and dynamic depth-wise convolution, validating that dynamic depth-wise convolution can perform on par with or slightly better than local window attention. It highlights the effectiveness of larger kernel sizes and input-dependent weights for vision tasks. [2] replaces the attention operations in Swin model families with simple linear mapping layers and shows that the macro architecture may be more responsible for high model performance. [3] proposes a unified macro architecture to identify the real gains of popular convolution and attention operators. It demonstrates that different token mixers achieve varying performance under the same macro architecture, with Halo attention and deformable convolution yielding optimal results. Based on these papers and our findings regarding the forget gate and block design, we boldly make the following inferences: - ***Both macro and micro structures are important***. [1, 2] emphasize the importance of macro structure, while [3] shows that superior token mixers can enhance performance under the same macro architecture. In our study, the forget gate and block design serve as micro and macro structures, respectively, both playing significant roles. 
- ***Input-dependent features, a large receptive field, and local bias are beneficial for effective micro token mixer designs.*** [1, 3] and our paper suggest these properties are effective. This further explains the effectiveness of MLLA, which incorporates all these features while maintaining linear complexity. **We will cite these related works and provide detailed discussion in the revised version.** [1] On the Connection between Local Attention and Dynamic Depth-wise Convolution, ICLR 2022. [2] What Makes for Hierarchical Vision Transformer? DOI: 10.1109/TPAMI.2023.3282019. [3] Demystify Transformers & Convolutions in Modern Image Deep Networks, arXiv:2211.05781. --- **3. Training cost.** 32 GPUs were used to perform this research. Each tiny-scale model, including ablation experiments, requires around 12 hours for training on ImageNet-1K with 32 GPUs. --- **4. The position of Fig. 1.** Thanks for your question. Fig. 1 is placed on page 2 as it serves as the main figure of our paper. Considering it is cited on page 4, we will adjust its position. --- **5. The results of Mamba + Swin.** Due to time and computational resource constraints, we are unable to provide the results of the Mamba + Swin architecture. However, we find that the initial version of VMamba yields pertinent findings by applying Mamba to the Swin architecture in a cross-scan approach. The results are provided below. | Model | #Param | FLOPs | Acc | | :----------------------------------: | :----: | :---: | :--: | | Mamba+Swin-T (implemented by VMamba) | 22M | 4.5G | 82.2 | | MLLA-T | 25M | 4.2G | 83.5 | --- Rebuttal 2: Comment: Dear Reviewer QFdu, thank you for your insightful review and for engaging with our work. We would like to know if there are any additional questions or concerns. We are eager to engage in further discussion and provide clarification to fully address them.
Rebuttal 1: Rebuttal: We thank all the reviewers for their insightful and valuable comments. We have carefully considered the reviewers' comments and provided additional clarification to address each concern. Here, we offer general responses to all reviewers on two key issues. --- **1. The motivation of our work.** This work primarily focuses on ***demystifying the key factors behind Mamba’s success***, rather than developing a series of models to achieve SOTA results. ***Our analyses and verifications form the core of this work***, with MLLA models serving to validate whether the subpar linear attention can match or even surpass the high performance of Mamba. --- **2. Discussion with related works.** Due to the recent abundance of Mamba-related works, we may have overlooked some important related work. We appreciate the reviewers for highlighting these works. We will give more credit to them and provide detailed discussions in the revised manuscript. --- **For detailed responses to individual reviewer comments, please refer to our separate responses to each reviewer.** Lastly, we would like to thank the reviewers for their time, and we welcome any further discussion. Pdf: /pdf/545ac92a6bb085771f24dbbdb1bb9583db247ab3.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
The Surprising Ineffectiveness of Pre-Trained Visual Representations for Model-Based Reinforcement Learning
Accept (poster)
Summary: This paper presents a study on pre-trained visual representations (PVRs) to improve the sample efficiency of model-based reinforcement learning methods (MBRL). It studies a range of PVRs with different architectures, pre-training objectives, and data modalities on two recent model-based algorithms: DreamerV3 and TD-MPC2. The authors propose to replace image observations with frozen PVR features, replacing the original CNN encoder/decoder with a trainable linear layer. The paper focuses on 9 diverse continuous control tasks from 2 different domains: the DeepMind Control Suite (DMC) and ManiSkill2. The results show that the use of PVRs surprisingly does not help to improve performance and has diminishing returns compared to training encoder representations from scratch. Strengths: The paper proposes a very interesting study on the application of PVRs for two recent model-based approaches. The use of PVRs for model-based RL is a promising area of research. The performance and learning speed of model-based agents like Dreamer are closely linked to the quality of the representations learned by the world model. Benefiting from already pre-trained representations to reduce training time or increase final performance would be very valuable, since the training time of model-based agents can take multiple days for some advanced tasks. The paper evaluates a diverse range of pre-trained models, showing the normalized returns of each pre-trained model. The authors find that learning the encoder representation from scratch leads to better performance for DreamerV3 and TD-MPC2, contradicting expectations based on previous works. The paper also experiments with out-of-distribution (OOD) versions of the environments (colors, object sizes), finding that PVRs do not help to improve performance in this setting either. The paper is clear and well written. Weaknesses: The study on the potential effect of using PVRs on MBRL methods is limited. 
The paper only studies the use of frozen PVRs, where the world model gradients are not back-propagated to the pre-trained weights during training. Solving RL tasks often requires the agent to pay attention to small details and learn very detailed representations, especially for DMC tasks. It would have been interesting to study the use of PVRs without frozen weights during training. As mentioned in the limitations in the conclusion, the authors only study the use of PVRs for continuous control tasks. It would have been interesting to study the application of PVRs to another domain based on discrete actions (Atari games, DeepMind Lab 3D environments, Minecraft...) or more aligned with data seen during PVR pre-training. Technical Quality: 3 Clarity: 3 Questions for Authors: Did the authors experiment with fine-tuning the PVRs during training instead of using frozen representations? It would have been interesting to experiment with optimizable PVRs (without a reconstruction loss in the case of DreamerV3). Maybe the use of PVRs could exempt DreamerV3 from using a reconstruction loss for learning encoder representations. The use of frozen representations may also be the cause of the observed results: all the information required to achieve good performance may not be recoverable from the compressed pre-trained representations. Did the authors experiment with reconstructing the image observations from frozen PVRs in order to study the possibility of recovering image information from the representations? Also to assess the quality of trajectories imagined by the world model. Did the authors experiment with using representations of layers other than the final layer? The usefulness of representations of pre-trained models can vary greatly depending on the chosen layer. Are the results different when using the features of inner layers or a collection of layer features? For the in-distribution autoencoder PVR, what policy is used to generate the data used for pre-training? 
Is it a random policy, a collection of policies to ensure diverse behaviors, or a pre-trained policy? I ask the question because, during training, the PVR should provide representations for the widest distribution of agent trajectories or positions. DreamerV3 sometimes has trouble reconstructing agent situations that are often seen when training on DMC tasks. I would be ready to increase my score if additional experiments and analysis are provided by the authors. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors adequately addressed the limitations in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
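The frozen-PVR setup described in the review above (frozen pre-trained features passed through a single trainable linear layer in place of the CNN encoder) can be sketched as follows. Names and shapes are illustrative assumptions, not the authors' code:

```python
import numpy as np

class FrozenPVREncoder:
    """Illustrative sketch of the setup described in the review: a frozen
    pre-trained visual representation (PVR) followed by one trainable
    linear layer replacing the world model's CNN encoder."""

    def __init__(self, pvr_fn, feat_dim, latent_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.pvr_fn = pvr_fn  # frozen feature extractor: never updated
        self.W = 0.01 * rng.standard_normal((feat_dim, latent_dim))  # trainable
        self.b = np.zeros(latent_dim)                                # trainable

    def __call__(self, obs):
        feats = self.pvr_fn(obs)        # no gradients would flow into the PVR
        return feats @ self.W + self.b  # only this linear projection is learned
```

During training, only `W` and `b` would receive gradients from the world model loss; the PVR weights stay fixed, matching the frozen-representation protocol the reviewer questions.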
Rebuttal 1: Rebuttal: __Questions:__ > “Did the authors experiment with fine-tuning the PVRs during training instead of using frozen representations ?” We actively decided to omit fine-tuning experiments since the original promise of many of the used representations is to perform (near) SOTA on downstream tasks without additional fine-tuning. Fine-tuning itself might not be an easy task since it can destroy parts of the representation needed for good generalization performance [1]. Ultimately, we believe that (if done right) fine-tuning can boost performance for each of the representations. But we are unsure if the performance boost would outperform training from scratch. This aligns with mixed results from end-to-end fine-tuning from the original VC-1 paper [2]. > "Maybe the use of PVRs could exempt DreamerV3 from using a reconstruction loss for learning encoder representations." We wanted to keep the MBRL algorithms (DreamerV3 and TD-MPC2) as original as possible. In DreamerV3, the reconstruction term is an integral part of the world model's loss and can neither be replaced nor omitted easily. As in the original algorithm, we utilize reconstructions so that the world model is able to include information about the state also in future predictions. More specifically, reconstruction is not only needed to learn the encoder and decoder networks, but also to learn long-term dependencies via the dynamics prediction. Besides, [3] showed that Dreamer without reconstruction results in poorer overall performance. On the other hand, similar reasons motivated us to include TD-MPC2 as an additional algorithm; it does not involve reconstructions and is more decision-aware (i.e. relying more on rewards). 
> “Did the authors experiment with reconstructing the image observations from frozen PVRs in order to study the possibility of recovering image information from representations?” We think the latter suggested experiment is an interesting approach for PVRs in combination with world models. Such experiments would follow a similar line to our dynamics prediction accuracy experiments. Therefore, we did not perform such experiments and would leave them for future research. Besides, recovering image information is not necessarily an indicator of good reward or task-specific dynamics prediction performance, since information needed for high rewards might not be present in the representation even if the overall reconstruction quality is good. > “Did the authors experiment with using representations of other layers than the final layer?” We conducted additional experiments during the rebuttal period by partly removing the transformer blocks of VC-1 (more specifically, we ablated ⅔ and ⅓ of the 24 transformer blocks). Results can be seen in Figure 3 of the PDF in the Author Rebuttal above. Using ⅔ of VC-1 results in similar performance to the full transformer. This suggests that transformer blocks near the final one offer as much information as the final output. With only ⅓ of VC-1, the performance drops significantly. It seems that earlier representations do not offer enough information for the MBRL agent to perform better or similarly. > “For the in-distribution autoencoder PVR, what policy is used to generate the data used for pre-training? Is it a random policy, a collection of policies to ensure diverse behaviors, or a pre-trained policy?” We utilized data collected with a DreamerV3 policy during its training phase, including data from the exploration phases. We used an agent utilizing VC-1 as the encoder. The motivation for using VC-1 as the encoder was to enlarge the data distribution explored by the algorithm in comparison to a better-performing agent trained from scratch. 
This approach ensures that the autoencoder encounters a diverse data distribution similar to that of the other agents during their training, thereby ensuring also a fair comparison. __Weaknesses and Limitations:__ > "... the authors only study the use of PVRs for continuous control tasks. It would have been interesting to study the application of PVRs to another domain based on discrete actions (Atari games, DeepMind Lab 3D environments, Minecraft...) or more aligned with data seen during PVR pre-training." We were able to generate additional results with a Miniworld [4] environment which supports discrete actions (see Figure 2 in the PDF attached to Author Rebuttal above). The new evidence supports our original claim that PVRs do not generally improve MBRL training and performance. Furthermore, we contend that the ManiSkill2 experiments are well aligned with the pre-training data of many PVRs (e.g., VC-1, R3M, VIP as promoted in the associated papers), and we were also interested in exploring their out-of-distribution performance in out-of-distribution domains like DMC. [1] Kumar, A., Raghunathan, A., Jones, R. M., Ma, T., & Liang, P. (2022). Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution. In International Conference on Learning Representations. [2] Majumdar, A., Yadav, K., Arnaud, S., Ma, Y. J., Chen, C., Silwal, S., … Meier, F. (2023). Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence? In Thirty-seventh Conference on Neural Information Processing Systems. [3] Hafner, D., Lillicrap, T., Ba, J., & Norouzi, M. (2020). Dream to Control: Learning Behaviors by Latent Imagination. In International Conference on Learning Representations. [4] Chevalier-Boisvert, M., Dai, B., Towers, M., Perez-Vicente, R. D. L., Willems, L., Lahlou, S., … Terry, J. K. (2023). Minigrid & Miniworld: Modular & Customizable Reinforcement Learning Environments for Goal-Oriented Tasks. 
In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track. --- Rebuttal Comment 1.1: Comment: Thank you for taking the time to respond in the rebuttal. After reading the paper and rebuttal, I have decided to maintain my rating. The paper does show the inability of DreamerV3 and TD-MPC to achieve comparable or superior performance when using input observations from frozen PVRs. However, I think that it would benefit a lot from additional experiments on all tasks to better understand the possible causes of these findings: analysis using different layer features, fine-tuning of PVR weights using reward and value loss gradients, analysis of accumulated reward/dynamics error on all tasks (not just Pendulum Swingup), and possible visualization of reconstructed video trajectories. The paper does provide interesting findings and could be used as a reference for future works, but I think the analysis of the potential of PVRs is still insufficient for a higher rating. --- Reply to Comment 1.1.1: Title: Thank you for your feedback to our rebuttal Comment: Thank you for taking the time to discuss our rebuttal and provide your feedback. Our primary aim with this work was to benchmark the capabilities of the unchanged PVRs in MBRL and to highlight the limitations of PVRs. To this end, the type and number of experiments conducted in our study are consistent with previous papers evaluating PVRs for policy learning (including new experiments from the rebuttal, which also cover your suggested analysis of using different layer features). In addition to the reasons outlined in our rebuttal as to why more of the suggested experiments are out of the scope of this paper, we argue that adding even more experiments also risks compromising the depth and quality of all aspects of the paper, especially since our paper has already reached the maximum page limit. 
Furthermore, conducting such an extensive study as ours involves significant effort as well as computational resources. By especially highlighting the limitations in our findings, we hope to save other researchers time and provide valuable insights for those exploring zero-shot PVR capabilities in future work. As you also highlighted, we strongly believe that our work will serve as a useful reference point for other researchers.
Summary: This paper contains an extensive study on the potential benefits of pre-trained visual representations (PVRs) for model-based reinforcement learning (MBRL). Given the success of PVRs for model-free RL (MFRL), there are reasons to believe that PVRs are equally beneficial for MBRL, something that has not yet been explored to the same degree. Surprisingly, tests on two different RL architectures in combination with at least eight different PVR variants indicate that PVR-based solutions rarely perform better than representations learned from scratch when applied to MBRL. In most cases, it seems preferable not to use PVRs for MBRL, which should come as a surprise to many, given recent advances in applications based on PVRs. Strengths: Given the recent development of PVRs and the increasing number of successful applications of PVRs, it is interesting to know what the opportunities and limiting factors are for PVRs when adopted for training agents using reinforcement learning. A study with this as a focus, such as the one presented in this paper, should interest many. Besides the most important conclusion drawn in the presented study, i.e. the fact that the benefits of PVRs for MBRL can be questioned, it contains several experiments to paint a more complete picture. It is concluded that the sample efficiency of MBRL agents rarely improves when supported by PVRs compared to those with representations learned from scratch. PVRs do not seem to make agents more robust to out-of-distribution (OOD) conditions either. It is also shown that the dynamics and reward prediction errors of RL agents are similar regardless of whether representations are learned from scratch or with PVRs. Also interesting is a study on the benefits of different properties of PVRs for generalization, where the most important property was shown to be data diversity during training, which might not be that surprising. 
The fact that language conditioning and the choice of transformer architecture have so little impact might be more unexpected though. Weaknesses: From the experiments, it can be concluded that PVRs hardly benefit MBRL, but the paper does not really try to answer why there is a difference between MBRL and MFRL in this regard. Could it be that the way DreamerV3 and TD-MPC2 are modified in this study, the PVR modules $g_*$ are trained to preserve semantic information, while in effect also suppressing spatial information? Earlier studies on PVRs for MFRL, cited in the paper, do not seem to do this but instead keep a path that allows a policy to be trained with some spatial information left intact. The authors seem surprised that language conditioning is not good for generalization, since it should provide semantically relevant features. Such a claim completely misses the fact that for e.g. manipulation it is often more important for a system to know where an object is located, how it is oriented and how large it is, than what class the object belongs to or any other semantic information. Most of our visual representations are explicitly trained to be invariant to transformations that might be particularly important for the task given to the agent. It is worth noting that the experiments were conducted in simulation, not on real robots. The paper does not draw conclusions that go beyond what can be observed in the experiments, but the differences between simulated and real-world conditions are not really regarded. There are other potential benefits of MBRL when applied to real robots. With MBRL any action executed on the system can be exploited for pre-training, while MFRL is typically focused on only one particular type of task.
It might be that this would not really affect the conclusions as expressed in the paper, but the differences in experimental conditions between simulations and real-world experiments can be very large. The same is true regarding conclusions drawn from the OOD experiments. In the paper, the nature of randomizations is the same for both the ID and OOD sets, even if the two sets don’t overlap. If you were to do the experiments in the real world, the conditions would be very different. Technical Quality: 3 Clarity: 3 Questions for Authors: * What is believed to be the reason why the benefits of PVRs for MBRL seem so limited, in particular, given earlier success for MFRL? * To what extent do the tested PVRs preserve spatial information relevant for tasks such as manipulation? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The fact that more experiments are needed to draw final conclusions is highlighted as a limitation, which is true indeed given the diversity of domains for which MBRL can be used and the computational demands for experiments in each such domain. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: __Weaknesses:__ > "From the experiments, it can be concluded that PVRs hardly benefit MBRL, but the paper does not really try to answer why there is a difference between MBRL and MFRL in this regard." We believe that multiple objective mismatches in the different training phases, which are not all present in MFRL, result in a difference. We already discussed this in the first bullet point of the Author Rebuttal above and will include this in the revised paper. > “Could it be that the way DreamerV3 and TD-MPC2 are modified in this study, the PVR modules 𝑔∗ are trained to preserve semantic information, while in effect also suppressing spatial information?” Previous studies (described in the related work) sometimes use the PVRs in combination with additional task information (like proprioceptive or spatial information) or use the output of the PVRs only. Since we wanted to test the capabilities of the PVRs only, we refrain from using additional task information as input for the MBRL agents since it is hard to measure to what extent the PVRs or the additional information are used to solve the task. We are unsure why the reviewer believes that we actively suppress spatial information, since we adapt DreamerV3 and TD-MPC2 only slightly. Many of the analyzed PVRs learn spatial information (e.g. VC-1, R2D2, Taskonomy Autoencoder). > “The authors seem surprised that language conditioning is not good for generalization, since it should provide semantically relevant features.” We would like to thank the reviewer for drawing our attention to this misunderstanding. We never wanted to convey this opinion but wanted to bring attention to the fact that many other approaches build on language without utilizing additional spatial information (e.g. R3M and those mentioned in our paper). These papers utilize language in their approaches, asserting that it significantly enhances performance. We wanted to emphasize this perspective.
We will revise this part of our paper for the final version. > “It is worth noting that the experiments were conducted in simulation, not on real robots. The paper does not draw conclusions that go beyond what can be observed in the experiments, but the differences between simulated and real-world conditions are not really regarded.” We appreciate the suggestion to train on real robots and will include this as an additional limitation in the paper. Conducting experiments on real robots is computationally demanding and time-consuming, particularly since our approach involves reinforcement learning rather than imitation learning. On the other hand, our approach was to benchmark the touted zero-shot capabilities of pre-trained visual models, rather than to devise a high-performing (MB)RL agent that performs well in real-world tasks. __Questions:__ > “What is believed to be the reason why the benefits of PVRs for MBRL seem so limited, in particular, given earlier success for MFRL?” We need to train a model of the environment as well as a reward predictor with those representations. The information preserved by the representations might not be enough to do both. (See also the related answer from the first weakness and Author Rebuttal above.) > “To what extent do the tested PVRs preserve spatial information relevant for tasks such as manipulation?” This depends on the individual training loss of each PVR. While R3M and all versions of CLIP use semantic information via language, the losses of the other PVRs allow learning spatial information implicitly. __Limitations:__ > "The fact that more experiments are needed to draw final conclusions is highlighted as a limitation, which is true indeed given the diversity of domains for which MBRL can be used and the computational demands for experiments in each such domain." We added an additional task from another environment domain (i.e. PickupObjects-v0 from Miniworld [1]).
You can see the results in Figure 2 in the PDF attached to the Author Rebuttal at the top. The new evidence supports our original claim that PVRs do not generally improve MBRL training and performance. [1] Chevalier-Boisvert, M., Dai, B., Towers, M., Perez-Vicente, R. D. L., Willems, L., Lahlou, S., … Terry, J. K. (2023). Minigrid & Miniworld: Modular & Customizable Reinforcement Learning Environments for Goal-Oriented Tasks. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track. --- Rebuttal 2: Comment: This reviewer wants to thank the authors for their informative comments on the points raised in the review. It is not suggested that the authors actively try to suppress spatial information. The question is rather why there is a difference between MFRL and MBRL with respect to the benefits of PVR. This reviewer suggests that if $g^*$ captures more semantic than spatial information, which might not be the case, there is no way for lost spatial information to be recovered, using the architectures in Figure 1. There might or might not be individual variations in implementations in terms of architectures and PVRs that better explain observed differences than any fundamental difference between MFRL and MBRL. --- Rebuttal Comment 2.1: Title: Thank you for your valuable comment Comment: We thank the reviewer for the valuable comment and the helpful participation in the discussion. We agree that it is hard or even impossible to recover spatial information from semantic information (and vice versa) inside the world model/agent using the embedding of the PVRs only. Since it would be an individual paper to measure what kind of information is captured by the PVRs exactly, it is out of scope for our paper to analyze this more deeply.
On the other hand, we believe that the difference in PVR performance between MFRL and MBRL is not explained solely by the spatial or semantic information captured (or not captured) by the PVRs, as the type of information needed (spatial or semantic or both) ultimately depends on the task. Therefore, missing coverage of semantic and spatial information in the PVRs should affect MFRL and MBRL algorithms equally. Regarding the individual variations: We categorized the PVRs and their scores according to different characteristics of the PVRs in Section 4.3 of our paper to measure the effect of different implementation details and variations. We have already included a large and diverse set of PVRs in our study, incorporating different architectures, loss types, data modalities, and more. If the reason for our findings was related to one of these implementation details, we would likely have observed it in our results. Furthermore, the architectures used in MFRL and MBRL are quite similar, as most methods utilize CNNs, MLPs, and LSTMs/GRUs, particularly in the benchmarks we reference. DreamerV3 and TD-MPC2 also employ these architectures. We think (based on our results in the paper and the rebuttal) that the aforementioned performance difference stems from the observation that reward-related information is not sufficiently captured in the representations to learn accurate reward prediction models (as needed by MBRL algorithms in contrast to MFRL methods).
Summary: This paper conducts a thorough set of experiments to evaluate the performance of pretrained visual representations (PVR) in model-based reinforcement learning. Empirical results show that PVR performs worse than learning from scratch, which is possibly due to the large reward prediction error. Strengths: - This paper shows how pre-trained visual representations perform in MBRL, which has been relatively unexplored in previous research. - A thorough set of experiments reveals that pre-trained representations perform worse than learning from scratch, which is surprising and could inspire future research to analyze this phenomenon further. Weaknesses: - This paper gives some possible reasons for the performance degradation of pre-trained representations. However, it lacks sufficient evidence to further support the claims. For example, Why does the reward prediction accuracy outweigh the dynamic prediction accuracy? Why do pre-trained representations lead to information loss about reward prediction? I encourage the author to dig deeper into the specific properties of pre-trained representations used in MBRL. Technical Quality: 2 Clarity: 2 Questions for Authors: - I think the performance degradation of PVR is mainly due to the fact that they are not trained on the experimental data used in this paper, as Autoencoder performs similarly compared to learning from scratch. Therefore, could you fine-tune the pre-trained visual encoder and then evaluate the performance? I guess slight fine-tuning can boost the performance a lot. - Why does VC-1 perform almost the best in maniskill (see figure 3 and figure 4)? Is VC-1 specifically pre-trained on these manipulation data? - Another possible reason is that models pre-trained on large datasets are usually hard to transfer to low-data regimes. See UniSim [1] for details. 
[1] Yang et al., Learning Interactive Real-World Simulators, ICLR 2024 Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Limitations have been discussed in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: __Weaknesses:__ > “Why does the reward prediction accuracy outweigh the dynamic prediction accuracy?” Our Figures 6 and 7 show that there exist differences in the dynamics as well as reward prediction accuracies between the trained models. The calculated correlations indicate that more subtle differences in reward predictions compared to dynamics predictions lead to poorer performing policies. Even though all models learn to predict the dynamics and are incentivized to do so via the losses of DreamerV3 and TD-MPC2, the best measure of task performance is the reward. Thus, its prediction quality is intuitively stronger correlated to task performance (as shown in our paper with the mentioned correlations). Since the predictions are directly used to update the policy, predictions must be accurate. > “Why do pre-trained representations lead to information loss about reward prediction?” We argue that those representations lose information about rewards during pre-training since they are most often trained with different training objectives than later used during the MBRL phase. That such an objective mismatch influences the training was already shown for MBRL itself, as MBRL methods like Dreamer and TD-MPC need to learn the dynamics and a policy with conflicting objectives. The additional mismatch in objectives between PVRs and MBRL is now an additional factor. In general, the PVRs are trained to compress the given information content (i.e. information bottleneck theory) and are not incentivized to capture information in the images relevant for a possible reward function. __Questions:__ > “Therefore, could you fine-tune the pre-trained visual encoder and then evaluate the performance?” We actively decided to omit fine-tuning experiments since the original promise of many of the used representations is to perform (near) SOTA on downstream tasks without additional fine-tuning and we wanted to examine these claims. 
Fine-tuning is therefore not the focus of our paper. Furthermore, correctly fine-tuning might not be an easy task since it can destroy parts of the representation needed for good generalization performance [1]. Ultimately, we believe that (if done right) fine-tuning can boost performance for each of the representations. But we are unsure if the performance boost would outperform training from scratch. This aligns with mixed results from end-to-end fine-tuning in the original VC-1 paper [2]. Furthermore, fine-tuning before the RL phase would also result in an unfair advantage compared to training the representation from scratch. But we believe that the question of how to adequately fine-tune such representations for MBRL and RL in general is nevertheless important and should be the topic of future research. > “Why does VC-1 perform almost the best in maniskill (see figure 3 and figure 4)? Is VC-1 specifically pre-trained on these manipulation data?” VC-1 was trained on a combination of datasets with a connection to manipulation and navigation. More specifically, VC-1 uses Ego4D and similar human manipulation as well as navigation data (but never robot manipulation data). We hypothesize that in this case the domain gap is smaller for ManiSkill2 due to the connection of the pre-training data to manipulation. Therefore, a combination of manipulation related data seems to be somewhat beneficial; even if it's observations of humans. > “Another possible reason is that models pre-trained on large datasets are usually hard to transfer to low-data regimes. See UniSim [1] for details.” We thank you for pointing out this important part of UniSim, since it is relevant for our conclusion. The paper states "During joint training of the UniSim on diverse data, we found that naïvely combining datasets of highly varying size can result in low generation quality in low-data domains.". 
As already mentioned, VC-1 uses a curated combination of datasets which supports this claim of UniSim. [1] Kumar, A., Raghunathan, A., Jones, R. M., Ma, T., & Liang, P. (2022). Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution. In International Conference on Learning Representations. [2] Majumdar, A., Yadav, K., Arnaud, S., Ma, Y. J., Chen, C., Silwal, S., … Meier, F. (2023). Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence? In Thirty-seventh Conference on Neural Information Processing Systems. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: After reading your rebuttal, I've decided to maintain my rating. It is still hard to determine the accurate answer to why PVR is useless for MBRL. I believe that future research should focus on how to properly incorporate PVR for better performance, like in many other machine learning scenarios. However, this paper is still limited to providing some possible approaches to addressing this challenge. I encourage the authors to study this problem further and dig deeper. That will be a high-quality paper. --- Reply to Comment 1.1.1: Title: Thank you for your feedback to our rebuttal Comment: Thank you for your feedback. We appreciate your encouragement and look forward to building on this work in future studies. Our primary goal with this paper is to draw attention to the issue and highlight the limitations of PVRs in MBRL. We agree that finding effective ways to incorporate PVRs for better performance is crucial, and we plan to address this in our future research. Attempting to tackle both the identification of the problem and its solution in a single paper would have risked compromising the depth and quality of each aspect. We think this paper alone is valuable to the community as it serves as a useful reference point for future research on the development of PVRs for generalist embodied agents. 
Furthermore, the presentation of such a paper at NeurIPS is important as it encourages critical thinking and discussion about our current approaches in developing PVRs.
Summary: Pre-trained Visual Representations (PVRs) have been widely applied to many domains to improve OOD generalization and sample efficiency, including model-free reinforcement learning (RL). This paper explores the application of PVRs in model-based RL, which has not been done. Experiments on two suites of simulated control environments (Deepmind control and Maniskill2) show that current PVRs are outperformed by representations learned from scratch in terms of sample efficiency and OOD generalization. The paper further analyzes the reward error and dynamics prediction error of various PVRs and provides meaningful insights. Strengths: - The paper is well written and easy to follow. The research questions that the paper investigates are clearly stated. The proposed benchmark is well-motivated. - The proposed benchmark has great potential in advancing research in developing better PVRs for MBRL. - The paper provides sufficient detail for the experiments allowing easier reproduction of the results. - The experiments include a large suite of popular PVRs, improving the credibility of the conclusions. Weaknesses: - The number of algorithms in each property category is small, which means that the performance of one algorithm can easily affect the average performance of the category, making the comparison of different categories (Figure 5) less significant. For example, the "sequential data" category and the "temporal loss" category only differ in one method (VC-1), and attributing the difference caused by this one method to the sequential data property might not be correct, since there are many other differences between VC-1 and R3M/VIP. Technical Quality: 3 Clarity: 3 Questions for Authors: - Different tasks can have very different reward functions even for the same environment, and we have no clue on what the reward function is when pre-training self-supervised visual representations.
Then, why should we expect a single PVR to perform well on learning good reward models for all kinds of tasks? In other words, is the task of building PVRs that can learn good reward models a somewhat intractable one? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The proposed benchmark is only evaluated on two suites of simulated control environments, which, as the authors also pointed out in the paper, is (1) somewhat insufficient, and (2) very challenging for the PVRs concerned because many of them are never trained on similar data distributions (especially for DMC). Therefore, saying that PVRs are "ineffective" for MBRL seems like an overstatement to me. Strong claims need strong evidence and the evidence provided in the paper is in my opinion insufficient. Of course I still appreciate the value of the proposed benchmark in that it brings the problem of applying PVRs to MBRL to the community's attention, but I do not agree with putting such an assertive statement as the title of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: __Weaknesses:__ > “The number of algorithms in each property category is small, which means that the performance of one algorithm can easily affect the average performance of the category, making the comparison of different categories (Figure 5) less significant. For example, the "sequential data" category and the "temporal loss" category only differ in one method (VC-1), and attributing the difference caused by this one method to the sequential data property might not be correct, since there are many other differences between VC-1 and R3M/VIP.” We integrated the mentioned categories “Sequential Data” and “Temporal Loss” since VC-1 is trained on sequential data but does not make use of this characteristic. While we agree that your statement is true for these two categories, the PVRs of the other categories represent a more diverse set of representations. In the other categories, especially “ViT” and “Diverse Data”, the performance differences are significant. Therefore, we will revise our paper accordingly and will remove the category “Temporal Loss” from our findings since the overlap between this category and “Sequential Data” is too large. Thank you for this tip. __Questions:__ > “Then, why should we expect a single PVR to perform well on learning good reward models for all kinds of tasks?” This expectation was established by various other papers referenced in our relevant work and by representations like VC-1, VIP, etc. We do not want to advocate the use of PVRs in MBRL per se. Our work is a benchmark of PVRs following the zero-shot idea established by previous works on MFRL and imitation learning. These works argue that PVRs can be helpful in downstream reinforcement learning tasks because the representations saw vast amounts of different data during training and/or were trained with losses capturing some form of value. The reason why we created the presented benchmark is precisely to challenge these claims. 
We do not make any claims regarding the capability of the investigated PVRs to capture features relevant for arbitrary reward models for all kinds of tasks. Our results highlight that there is not a single PVR that outperforms from scratch training in every environment. > “In other words, is the task of building PVRs that can learn good reward models a somewhat intractable one?” It is most likely intractable to train a single PVR capturing reward information for all possible tasks. But our results suggest that e.g. diverse but curated datasets might help for specific tasks. VC-1 especially shows that a combination of diverse datasets taking e.g. manipulation aspects into account is an important element for a PVR that is used for manipulation tasks. However, VC-1 still cannot solve all tasks as effectively as learning a representation from scratch with MBRL (i.e. solving DMC tasks). Furthermore, both learning phases (pre-training of representations and downstream RL policy training using such PVRs) are heavily influenced by an objective mismatch since the pre-training objective and the RL training objective usually differ much. PVRs are not incentivized to capture reward related information from the images and might discard that information during learning whereas RL training objectives are driven by a reward function (see UMAPs in Figure 1 in PDF of Author Rebuttal). __Limitations:__ > “...because many of them are never trained on similar data distributions” Indeed, the PVRs are often not trained on data distributions similar to our experiments. But we want to mention that many of the original papers of the PVRs used actually claim that their representations work on DMC and robotics environments like ManiSkill2 or even provide experiments supporting those claims (see VC-1, VIP, etc.). E.g. VIP was trained on Ego4D only. The related paper [1] shows that algorithms using VIP can solve robotic manipulation tasks. 
VC-1 was trained on diverse control data and was shown to work on DMC as well. The same paper includes experiments showcasing that VIP and R3M enable RL policies to be SOTA in DMC. The Taskonomy representations were used to solve tasks of the VizDoom environments and robotic manipulation tasks. We used our presented environments to show that those claims do not hold for MBRL. > “Of course I still appreciate the value of the proposed benchmark in that it brings the problem of applying PVRs to MBRL to the community's attention, but I do not agree with putting such an assertive statement as the title of the paper.” We understand your concerns about the assertiveness of our claims and title. With an additional navigation experiment from the Miniworld [2] environment we will improve the paper to temper these claims, ensuring they more accurately reflect the evidence presented. You can see those results in Figure 2 in the PDF of the Author Rebuttal above. The new evidence still supports our original hypothesis. Our intention in choosing a somewhat provocative title is to highlight the above-mentioned potential limitation of current PVR approaches when applied to MBRL (as you mention yourself) and to challenge the common assumption that PVRs *generally* improve RL training. We do not claim that PVRs are worse in every environment, but many PVRs are presented as general foundation models, implying they should perform well on DMC as well. However, they do not, which is reflected in our title. [1] Ma, Y. J., Sodhani, S., Jayaraman, D., Bastani, O., Kumar, V., & Zhang, A. (2023). VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training. In The Eleventh International Conference on Learning Representations. [2] Chevalier-Boisvert, M., Dai, B., Towers, M., Perez-Vicente, R. D. L., Willems, L., Lahlou, S., … Terry, J. K. (2023). Minigrid & Miniworld: Modular & Customizable Reinforcement Learning Environments for Goal-Oriented Tasks. 
In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track. --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my questions and concerns. I increased the confidence score to 4. Considering the overall contributions and limitations of the paper (including ones that the other reviewers pointed out), I will keep my other ratings. --- Reply to Comment 1.1.1: Title: Thank you for increasing your confidence score Comment: Thank you for taking the time to review our paper and for increasing the confidence score. We value your feedback and are grateful for your insights. We will integrate your suggestions, as discussed in the rebuttal, in the final version of the paper.
Rebuttal 1: Rebuttal: We thank the Reviewers for their thorough feedback. We appreciate the detailed suggestions and the recognition of the value that our results bring to the community. We were keen to integrate the Reviewers' valuable feedback into our paper and will make the revisions to the paper accordingly. Here is a summary of changes and answers: - Reviewer HgP2 raised questions regarding the underlying causes of the poor performance in reward prediction. We ultimately believe that this is due to multiple objective mismatches which are present between the pre-training of the visual representations (PVRs) and the actual MBRL training phase, as well as in MBRL itself where we have a mismatch between dynamics learning and policy learning [1]. In particular, the first-mentioned objective mismatch complicates the transfer of reward information, since PVRs are trained to compress information (information bottleneck theory) and as such the training might not capture reward-relevant data from the pre-training images. On the other hand, MBRL methods like DreamerV3 and TD-MPC2 heavily rely on reward information in their objective. We added UMAP embeddings (see Figure 1 of the attached PDF) showing that reward information is more closely embedded in representations which are learned by the agents completely from scratch. Regarding previous results, we additionally want to mention that reward information was often irrelevant in previous benchmarks based on imitation learning. - We got several reviewer questions regarding fine-tuning some of the PVRs on task-specific data (HgP2;32eA). Since we wanted to test the touted zero-shot capabilities of the PVRs, we consciously decided against additional fine-tuning experiments. Fine-tuning is therefore not a focus of our paper. Furthermore, since fine-tuning causes its own difficulties and challenges (e.g. it can distort features [2]), we want to leave those research questions for future research.
- Reviewer 32eA suggests ablating the PVRs as it was already done in previous benchmarks. Therefore, we conducted an experiment with DreamerV3 and VC-1 removing ⅓ and ⅔ of the 24 transformer blocks of VC-1. Results can be seen in Figure 3 of the attached PDF. Outputs of earlier transformer blocks do not exceed the performance of the full representation. - One way to gain further insights is to expand task domains beyond DMC and ManiSkill2 as also pointed out by reviewers mY2R and DQ5X. We already acknowledged this in our limitations. Both mentioned domains support continuous actions only. We now included an additional navigation experiment from Miniworld [3]. Due to the discrete action-space and the short rebuttal time, we were only able to perform the experiment with DreamerV3 and a selected number of PVRs. TD-MPC2 does not support discrete action spaces. The results can be seen in Figure 2 of the attached PDF. The new evidence supports our original claim that PVRs do not generally improve MBRL training and performance. Similar to the other experiments, DreamerV3 agents trained from scratch are more sample efficient and performant compared to agents using PVRs. Only VC-1 is able to perform comparably. On the other hand, Reviewers highlighted the clarity and thoroughness of our paper (mY2R;32eA), the novel exploration of PVRs in MBRL (HgP2), and the well-motivated benchmark (including out-of-distribution evaluation) that advances research in this area (mY2R;32eA). Reviewer DQ5X highlighted that such a “study with this as a focus, such as the one presented in this paper, should interest many.”. The reviewers also valued our surprising findings that pre-trained representations often perform worse than learning from scratch (HgP2;32eA;DQ5X) and the large number of PVRs analyzed (mY2R;32eA). We sincerely hope that the changes and answers have addressed reviewer concerns. If so, we kindly request that you consider revising your scores. 
Please let us know if you have any further comments! [1] Lambert, N., Amos, B., Yadan, O., & Calandra, R. (2020). Objective Mismatch in Model-based Reinforcement Learning. In Learning for Dynamics and Control (pp. 761–770). PMLR. [2] Kumar, A., Raghunathan, A., Jones, R. M., Ma, T., & Liang, P. (2022). Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution. In International Conference on Learning Representations. [3] Chevalier-Boisvert, M., Dai, B., Towers, M., Perez-Vicente, R. D. L., Willems, L., Lahlou, S., … Terry, J. K. (2023). Minigrid & Miniworld: Modular & Customizable Reinforcement Learning Environments for Goal-Oriented Tasks. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track. Pdf: /pdf/1bc2a978dcb4bd3051b51b7caea917daa64a8c47.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Can Graph Neural Networks Expose Training Data Properties? An Efficient Risk Assessment Approach
Accept (poster)
Summary: The paper introduces an improved method for conducting property inference attacks on graph neural networks (GNNs), particularly focusing on reducing the computational overhead associated with traditional approaches that rely on numerous shadow models. The authors propose a model approximation technique to generate a sufficient number of approximated models for attacks, without the need for extensive retraining. They introduce a diversity-enhancing mechanism to ensure the effectiveness of the approximations. Experiments demonstrate notable improvements in attack accuracy and efficiency compared to existing methods, validating the proposed approach's effectiveness and efficiency. Strengths: - Solid theoretical support - Well-written paper - Novel combination of different algorithms Weaknesses: - Lack of details and certain explanations - Weak experimental settings Technical Quality: 3 Clarity: 3 Questions for Authors: - This paper is well-structured and well-written. I particularly appreciate the authors' efforts in providing thorough theoretical explanations and introducing innovative approaches. Replacing the training of numerous shadow models with model approximation is a significant and impactful contribution, effectively enhancing the efficiency of general property inference attacks. - This paper assumes that the target and auxiliary graphs are splits of the same original graph, which is an impractical assumption in real-world scenarios. It would strengthen the paper's applicability if the authors used distinct network graphs (yet from the same domain) for the target and auxiliary graphs. - When comparing training efficiency with state-of-the-art methods, it is unclear whether the time required to generate diverse approximated models and calculate error is included. Including such details would enhance the transparency and reliability of the comparisons. 
- While the paper presents improvements over baselines, it does not thoroughly discuss why the proposed method might outperform baselines by such a large margin. Although it is clear that using a few approximated models in place of hundreds of shadow models contributes to efficiency, the reasons why it is much more effective than the baselines need to be further discussed. - The choice of property attributes inferred in this study (e.g., # male users > # female users, or # publications with "IS" > # publications without "IS") seems somewhat limited. The significance of this work would be enhanced if the authors expanded their experiments to include the inference of a broader variety of node or link properties, such as the proportion of different kinds of nodes or links. - I would be pleased to increase my score if the authors address these issues, which would substantially enhance the paper's contribution to the field. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: see questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >Q2: Experiment using distinct network graphs for the target and auxiliary graphs. Thank you for your insightful comment. We select two social network datasets, Facebook and Pokec. We consider two cases: Facebook and Pokec as the target and auxiliary graphs respectively, and vice versa. We keep all other settings unchanged and report the attack accuracy and runtime below. While the overall attack performance declines, our model consistently performs best with significant speed-ups. Facebook $\Rightarrow$ Pokec (Node property) | | GPIA | PEPIA-DS | PEPIA-S | AIA | Ours | |---------|------|----------|---------|-----|----------| | **Accuracy** | 56.7 | 54.5 | 57.6 | 53.4| **58.3** | | **Runtime(s)** | 1732 | 1794 | 1777 | 1729| **267** | Facebook $\Rightarrow$ Pokec (Link property) | | GPIA | PEPIA-DS | PEPIA-S | AIA | Ours | |---------|------|----------|---------|-----|----------| | **Accuracy** | 54.6 | 56.8 | 52.1 | 53.0| **57.3** | | **Runtime(s)** | 1569 | 1648 | 1609 | 1535| **236** | Pokec $\Rightarrow$ Facebook (Node property) | | GPIA | PEPIA-DS | PEPIA-S | AIA | Ours | |---------|------|----------|---------|-----|----------| | **Accuracy** | 60.2 | 62.4 | 60.3 | 64.6| **65.7** | | **Runtime(s)** | 1244 | 1297 | 1262 | 1237| **233** | Pokec $\Rightarrow$ Facebook (Link property) | | GPIA | PEPIA-DS | PEPIA-S | AIA | Ours | |---------|------|----------|---------|-----|----------| | **Accuracy** | 57.5 | 55.9 | 56.1 | 55.6| **59.3** | | **Runtime(s)** | 1296 | 1385 | 1342 | 1321| **177** | >Q3: Details of runtime. Thank you for this valuable suggestion. We will update our manuscript accordingly. The reported runtime covers the entire attack process, including generating approximated models and calculating errors. 
As an example, we provide a runtime analysis of the attack on Facebook’s node property: |Task|Time(s)| |-|-| | Sampling reference graphs| 12 | | Training reference models | 74 | | Generating augmented graphs, computing errors and diversity, and selecting augmentations | 55 | | Generating approximated models | 103 | | Inferring target graphs' properties | 10 | | Total | 254 | >Q4: Discussion on the performance. Thanks for your reminder. The diversity of graphs used to generate parameters or posteriors is crucial for training a strong attack model [1,2]. Our approach involves specific mechanisms to ensure diversity in both reference and augmented graphs, greatly boosting attack performance as shown in our ablation study. In contrast, conventional attacks do not incorporate specific designs for the diversity of shadow graphs, thus leading to sub-optimal performance. We will incorporate this discussion into our paper. >Q5: Experiments on other properties. As suggested, we evaluated our approach on the OGBN products dataset [3]. Products are categorized into consumer goods and non-consumer goods based on label descriptions. We define the node property as the proportion of non-consumer goods in the target graphs: 35% (original) or 65% (higher); other settings maintain consistency with our manuscript. The table below presents our results compared to the best baseline. Our attack achieves superior performance while being 12.0× faster, showing its effectiveness across a broader range of properties. 
| **Method** | **Accuracy** | **ROC-AUC** | **Runtime(s)** | |------------|--------------|-------------|----------------| | PEPIA-DS | 92.3 | 88.5 | 34881 | | Ours | **93.2** | **89.1** | **2918** | [1] Property Inference Attacks on Fully Connected Neural Networks using Permutation Invariant Representations [2] Formalizing and Estimating Distribution Inference Risks [3] https://ogb.stanford.edu/docs/nodeprop/#ogbn-products --- Rebuttal 2: Comment: Dear Reviewer, We appreciate your recognition of the importance of our work and the time you took to provide a detailed review. We would like to confirm that we have addressed all your concerns about our submission. If there is anything more we can do to help you in evaluating our paper, please don't hesitate to let us know. Best regards, Authors --- Rebuttal 3: Comment: As we are nearing the end of the discussion phase, we would like to kindly inquire if you have any remaining questions about our paper. We would appreciate any feedback and would be glad to discuss it further.
Summary: This paper delivers an effective way to improve the efficiency of property inference attacks for GNNs. Instead of training numerous shadow models, the authors propose to train a few reference models and use an efficient approximation method to obtain other shadow models trained on slightly augmented shadow graphs. Experimental results show that the proposed property inference attack achieves better performance in a shorter time. Strengths: - To me, this paper finds an interesting application of machine unlearning in reducing the time complexity of training shadow models in inference attacks. - The experimental results show a desirable performance of the proposed method over baseline attack methods in both efficacy and efficiency. - The paper is overall well-organized and easy to follow. Weaknesses: - The scope and significance of this paper are somewhat limited. The main contribution of this paper is to improve the efficiency of a specific inference attack model. It would be helpful to provide more discussions on the new attack algorithm, e.g., insights on defending against the new attack method. - The technical novelty is limited. In my opinion, the technical part of this paper is pretty much a certified graph unlearning method, which has been studied in previous literature [1,2,3,4]. I don't find a distinct improvement or new contribution of the technical part compared with [1,2,3,4]. - Computing the inverse Hessian in the influence function is considered highly time-consuming. It would be helpful to add some complexity analysis of the algorithm. [1] Wu, Kun, et al. "Certified edge unlearning for graph neural networks." KDD 2023. [2] Pan, Chao, Eli Chien, and Olgica Milenkovic. "Unlearning graph classifiers with limited data resources." TheWebConf 2023. [3] Chien, Eli, Chao Pan, and Olgica Milenkovic. "Efficient model updates for approximate unlearning of graph-structured data." ICLR 2023. [4] Wu, Jiancan, et al. 
"Gif: A general graph unlearning strategy via influence function." TheWebConf 2023. Technical Quality: 2 Clarity: 4 Questions for Authors: The theoretical analysis still relies on the convexity assumptions. To satisfy the assumptions, it would be feasible to use a linearized reference model. It is not clear which type of GNN is used as the reference GNN. Confidence: 4 Soundness: 2 Presentation: 4 Contribution: 2 Limitations: The authors have included detailed discussions on the societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > W1: Limited scope and significance. Thanks for your comment. We'd like to emphasize that inefficiency is a major bottleneck in graph property inference attacks. Our contribution significantly enhances the efficiency of these attacks, enabling their practical application at scale, which is essential for both academic research and real-world use. Specifically, existing attacks can take over 3 days on graphs with more than 1,000,000 nodes using 700 shadow models [1], making them practically infeasible. Our method can reduce this to a few hours. Even on medium-large graphs with 200,000 nodes, current attacks take up to 7.5 hours, while our attack is over 10x faster, requiring less than one-tenth of the shadow models trained from scratch. Additionally, our attack also delivers superior performance. We also acknowledge the importance of discussing defensive methods against these more efficient attacks. Our attack can facilitate the faster development of defense strategies. Effective defense hinges on its ability to counteract attacks, and a more effective and efficient attack can serve as a tool to refine defense strategies. For example, adversarial learning [2] iteratively adjusts both defense and attack strategies to minimize the success rate of attacks. Due to the inefficiency, adopting conventional property inference in adversarial learning may lead to poor defense efficiency. In contrast, our attack has the potential to accelerate the defense process, enhancing its feasibility. >W2: Technical novelty is limited. Thank you for your comment, which inspired us to clarify our contribution and the differences between our technique and existing methods. 
First, our approximation technique differs from existing graph unlearning methods in two key ways: (1) Unlike existing graph unlearning methods, which are restricted to removing either nodes, edges, or features individually [3,4], or are limited to specific GNN architectures [5,6], our method enables the removal of both nodes and edges across any GNN architecture. This broad applicability significantly enhances its utility in creating diverse augmented graphs, thereby improving attack performance across a wide range of target GNNs. (2) Unlike existing approaches that rely on predefined removals, we introduce a novel selection mechanism for determining which nodes and edges to remove, ensuring diverse augmented graphs and accurate approximation. This facilitates the creation of a strong attack model. Our selection comprises three parts: - To measure the accuracy of model approximations, we derive a theoretical error estimation with innovative mathematical derivations, which is computable given the removal. As we remove both nodes and edges with generic GNNs, existing analyses [3, 5, 6] cannot be directly applied. - To measure the total diversity of graphs through various removals, we propose using the efficient edit distance as the metric. - To minimize approximation error while maximizing total diversity, we formulate this optimization as a quadratic integer programming problem, which is efficiently solvable. Furthermore, we emphasize that the biggest contribution of this work is improving the efficiency of graph property attacks, enabling practical application (please refer to our response in W1). To achieve this, we designed two novel technical components: (1) the diverse sampling of reference graphs, which is beyond the scope of unlearning, and (2) the selection of augmented graphs with model approximation, which differs from existing unlearning in both the type and selection of removals. These components form the core technical contributions of this work. 
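To make the error-versus-diversity trade-off concrete, here is a minimal, hypothetical Python sketch that replaces the quadratic integer program described above with a simple greedy heuristic; the `errors` and `diversity` inputs are placeholders standing in for the theoretical error estimates and edit-distance measures, not the actual estimators from the paper.

```python
def select_augmentations(errors, diversity, k, lam=1.0):
    """Greedily pick k candidate removals, trading off approximation
    error against total pairwise diversity.

    errors[i]       : estimated approximation error of candidate i
    diversity[i][j] : edit-distance-style diversity between i and j
    lam             : weight on diversity versus error

    This greedy loop is a stand-in for the quadratic integer program
    described in the rebuttal, shown only to illustrate the objective.
    """
    n = len(errors)
    chosen = []
    remaining = set(range(n))
    for _ in range(k):
        def gain(i):
            # diversity gained relative to what is already chosen,
            # penalized by the candidate's own approximation error
            div = sum(diversity[i][j] for j in chosen)
            return lam * div - errors[i]
        best = max(remaining, key=gain)
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy example: 4 candidate removals, select 2.
errors = [0.1, 0.9, 0.2, 0.3]
diversity = [[0, 5, 1, 2],
             [5, 0, 4, 3],
             [1, 4, 0, 6],
             [2, 3, 6, 0]]
selected = select_augmentations(errors, diversity, k=2)
```

In the toy run, the lowest-error candidate is picked first, and the second pick favors the candidate most diverse from it even though its own error is higher.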
> W3: Computation of inverse Hessian. The computational cost of model approximation is relatively low. As the gradient can be easily computed by the PyTorch Autograd Engine, the main operation is solving the inverse of the Hessian. We follow [3] to reduce this to linear complexity by converting inverse computation into finding a unique minimizer of a quadratic function. With the efficient Hessian-vector product and the conjugate gradient method (CG), this can be solved with $O(t|\theta|)$ time complexity [3], where $|\theta|$ denotes the number of parameters and $t$ denotes the iteration number in CG. In experiments, our attack performs well with few iterations. We will update our paper to include these computational details. >Q1: Convexity assumptions. Thank you for your insightful comment. To the best of our knowledge, no generic unlearning approach with theoretical guarantees has successfully removed the convexity assumption, which is a challenging task. Due to the non-convex nature of GNN models, the Hessian can become non-invertible. To address this, we adopted a common solution of introducing a damping term to the Hessian, which has proven effective in [3,7]. In our experiments, reference GNNs use the same architectures as the target models, including GraphSAGE, GCN, and GAT. To better align with theoretical assumptions, we also test on SGC, a linear GNN, with 2 hops. We compare attack performance and runtime on Facebook's node property, with results detailed below. ||**GPIA**|**PEPIA-DS**|**PEPIA-S**|**AIA**|**Ours**| |-|:-:|:-:|:-:|:-:|:-:| |**Accuracy**|58.7|60.3|61.0|59.3|**62.5**| |**Runtime(s)**|609|618|620|614|**142**| The results show that our method consistently outperforms other baselines with exceptional efficiency on the linear GNNs. 
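The CG-with-damping scheme sketched above can be illustrated with a plain-NumPy solver that touches the Hessian only through Hessian-vector products; this is a generic illustration on a random quadratic, not the authors' code, and the damping value is an arbitrary placeholder.

```python
import numpy as np

def cg_solve(hvp, v, damping=0.1, iters=200, tol=1e-12):
    """Solve (H + damping*I) x = v using only Hessian-vector products.

    The damping term keeps the system positive definite, mirroring the
    common fix for non-convex models mentioned in the rebuttal. Each CG
    step needs one HVP, giving the O(t|theta|) cost for t iterations.
    """
    x = np.zeros_like(v)
    r = v.copy()                      # residual for the x = 0 start
    p = r.copy()
    rs_old = r @ r
    for _ in range(iters):
        Hp = hvp(p) + damping * p
        alpha = rs_old / (p @ Hp)
        x = x + alpha * p
        r = r - alpha * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Toy check against a direct dense solve (only feasible for tiny models).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20))
H = A @ A.T                           # symmetric PSD stand-in for a Hessian
v = rng.standard_normal(20)
x = cg_solve(lambda p: H @ p, v, damping=0.1)
direct = np.linalg.solve(H + 0.1 * np.eye(20), v)
```

For a real model, `hvp` would be implemented with automatic differentiation (a double-backward pass) so the Hessian is never materialized.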
[1] Group Property Inference Attacks Against Graph Neural Networks [2] Adversarial learning techniques for security and privacy preservation: A comprehensive review [3] Certified edge unlearning for graph neural networks [4] Gif: A general graph unlearning strategy via influence function [5] Unlearning graph classifiers with limited data resources [6] Efficient model updates for approximate unlearning of graph-structured data [7] Understanding Black-box Predictions via Influence Functions --- Rebuttal 2: Comment: Dear Reviewer, Thank you for your review and comments. In our rebuttal, we made special efforts to clarify the distinctions between our approach and existing graph unlearning methods, provided additional discussions on our attack algorithm, and enhanced our technical details. We would greatly appreciate any feedback on any existing or new points we may not have covered, and we would be glad to address or discuss them further. Best regards, Authors --- Rebuttal 3: Comment: I thank the authors for their further clarification which helps strengthen the contributions of this work. However, I still think the technical novelty of the model approximation part (which also means the theoretical contribution) is somewhat limited. The mentioned simultaneous removal of nodes, edges, and features and removal for various GNN architectures seem also to be covered by existing studies [1]. Further comparison and clarification would be helpful. Based on the current response, I am willing to improve my score to 4. [1] Dong, Yushun, et al. "IDEA: A Flexible Framework of Certified Unlearning for Graph Neural Networks." KDD 2024. --- Rebuttal 4: Comment: Thank you for bringing this work to our attention. We note that this work was published online in July of this year, which was subsequent to the submission of our paper. Therefore, we did not discuss it in our initial version. 
After a thorough review of this paper, we have identified two key aspects where our approach differs significantly: (1) The approach in [1] relies on additional theoretical assumptions that impact the efficiency of model approximation, making it less suitable for our efficiency requirements. Specifically, they define different affected node groups based on the type of unlearning requests (e.g., node removal, edge removal), with the assumption that these groups do not overlap. When removing a mix of nodes and edges, this assumption may fail. To address this, they split the removal into multiple sets, each meeting the non-overlapping requirement, and perform the unlearning processes on these sets sequentially. This approach incurs significantly higher computational costs; for instance, if the removal is divided into five sets, five separate unlearning processes are required sequentially. In contrast, our approach simply groups nodes into an influenced set and a removed set, requiring only one unlearning process per removal, which ensures greater efficiency. (2) We employ a different error estimation approach. Specifically, the error estimation proposed in [1] supports their certified unlearning proof, which involves the more complex computation of the inverse Hessian. This could significantly compromise the speed of selecting removals in our attack. In comparison, our error estimation can be computed directly based on the removal and graph structure, allowing for a more efficient attack. [1] Dong, Yushun, et al. "IDEA: A Flexible Framework of Certified Unlearning for Graph Neural Networks." KDD 2024. --- Rebuttal Comment 4.1: Comment: I thank the authors for their further response. I agree with the first point about the shortcomings of the IDEA method, but I don't get the difference between the two methods in the inverse Hessian computation step. In addition, the computation of the inverse Hessian is not claimed to be an original contribution by this work. 
Nevertheless, it is true that IDEA was released recently and can be seen as a concurrent work, so it is not a big deal. I suggest the authors add more analyses and discussions on the efficiency of the proposed model approximation part. In comparison, Theorem 2 is not quite informative here as the advantage of the proposed model approximation compared with previous graph unlearning methods is its efficiency. If more details are added, I would like to further raise my score. --- Rebuttal 5: Comment: We apologize for any confusion caused. We want to clarify that the proposed error estimation does not accelerate the model approximation itself but rather speeds up the overall attack process. Specifically, in our attack framework, we need to select augmented graphs that incur relatively lower approximation errors, enabling a more effective attack (as shown in our ablation study). The efficiency we refer to is in this selection step. Since we need to measure the error for each augmented graph, the error measurement should be straightforward and easy to compute. Therefore, we introduce a new error bound in Theorem 2, which supports the removal of both nodes and edges. This bound is then used as an efficient error estimation to select augmented graphs. Upon careful review, we found that the IDEA method also derived a similar error bound. However, it involves the term of the inverse Hessian (Eq. (16) of [1]), which is more complex and cannot be easily computed for all augmented graphs, making it unsuitable for our selection step. In summary, our goal is not to achieve faster model approximation, but rather to use our error bound to help efficiently select data, which in turn makes the overall attack process faster. [1] Dong, Yushun, et al. "IDEA: A Flexible Framework of Certified Unlearning for Graph Neural Networks." KDD 2024. --- Rebuttal Comment 5.1: Comment: I thank the authors for providing further clarifications. Most of my concerns are addressed. 
It would be great if the authors could add the discussions in the rebuttal phase to the revised paper. --- Rebuttal 6: Comment: We are happy to hear that our response was helpful. Thank you for your prompt feedback and recognition. We will revise our paper according to your suggestions.
Summary: This paper proposes a more efficient property inference attack on graph neural networks (GNNs). Particularly, it uses model approximation methods to reduce the number of shadow models that need to be trained by the graph property inference attack. Here only a limited number of shadow models are trained from scratch, while a model approximation method is used to generate other approximated shadow models without training them. Theoretical upper bounds for the approximation error are also provided. Strengths: 1. The paper is overall clearly written and easy to follow. 2. The research problem of making the inference attacks more efficient is relevant. 3. The proposed method achieves the SoTA attack performance and largely reduces the training time of the attack model. Weaknesses: 1. How scalable is this attack for larger datasets? Current experiments are limited. Please perform the attack on larger datasets from the OGBN dataset (ArXiv, Products). 2. How well does this scale for other GNNs? Please evaluate with other GNN models. 3. Fig. 1 is confusing. Please update it to clearly show the difference between the proposed approach and the existing attacks. Technical Quality: 3 Clarity: 2 Questions for Authors: Please refer to the weaknesses. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Sufficiently addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > W1: Experiments on larger datasets. As noted in Appendix A.5 and Table 8, we have already included experiments on a million-level dataset, Pokec-100M, which contains 1,027,956 nodes and 27,718,416 edges. We found that the best baseline, PEPIA-DS, incurs a significant cost, while our method is 10.2× faster and achieves both higher accuracy and ROC-AUC. To ensure that the results highlighting scalability are emphasized, we have restructured our paper to mention this experiment at the beginning of the experiments section. Additionally, as you suggested, we further evaluate our approach on the OGBN products dataset (2,449,029 nodes), larger than Pokec-100M. We categorize products into consumer goods and non-consumer goods based on label descriptions. The property is set as the proportion of non-consumer goods in the target graphs: 35% (original) or 65% (higher); the remaining settings are consistent with our manuscript. As shown in the table below, our attack achieves superior performance compared to the best baseline while being 12.0× faster, underscoring its scalability. | **Method** | **Accuracy** | **ROC-AUC** | **Runtime(s)** | |------------|--------------|-------------|----------------| | PEPIA-DS | 92.3 | 88.5 | 34881 | | Ours | **93.2** | **89.1** | **2918** | >W2: Experiments on other GNNs. As indicated in Appendix A.5 and demonstrated in Figure 3, we have already expanded our experimental scope to include attacks on other GNNs, including GCN and GAT. To highlight these results, we have restructured our paper to mention this experiment at the beginning of the experiments section. Additionally, to show the significance on other types of GNNs, we further evaluate with SGC [1], setting the number of hops to 2. The table below shows the performance and runtime of the attack on Facebook's node property, using SGC as target models. 
The results show that our method consistently achieves superior performance with exceptional efficiency on these widely adopted GNNs. | | **GPIA** | **PEPIA-DS** | **PEPIA-S** | **AIA** | **Ours** | |------------|----------|--------------|-------------|---------|----------| | **Accuracy** | 58.7 | 60.3 | 61.0 | 59.3 | **62.5** | | **Runtime(s)** | 609 | 618 | 620 | 614 | **142** | >W3: Fig.1 is confusing. Thank you for your valuable comment. We have revised Fig.1 to clearly show the difference between our method and the existing attacks. Please refer to Fig. 1 in our uploaded rebuttal PDF. We will incorporate this new figure into our paper. [1] Simplifying Graph Convolutional Networks --- Rebuttal 2: Comment: Dear Reviewer, Thank you for your review and comments. We hope our rebuttal and additional evaluations have effectively addressed your primary concerns regarding our paper. We would greatly appreciate any feedback on any existing or new points we may not have covered, and we would be glad to address or discuss them further. Best regards, Authors --- Rebuttal 3: Comment: I thank the authors for their rebuttal. I am increasing my score to 5. --- Rebuttal 4: Comment: We are happy to hear that our response was helpful. Thank you for your prompt feedback and recognition.
null
null
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their time and effort in reviewing our paper. As suggested by Reviewer j8ts, the following PDF contains our revised figure to better illustrate the differences between our method and existing attacks. Pdf: /pdf/dcc2cdb43cfa1a054b95a5f1cc35c81109c73b1a.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Are Your Models Still Fair? Fairness Attacks on Graph Neural Networks via Node Injections
Accept (poster)
Summary: The authors propose a fairness attack on GNNs through node injection. They propose two node injection principles, the uncertainty-maximization and homophily-increase principles, to make fake node injections lead to a more significant fairness compromise. Strengths: This article is well-written and highly readable. The focus on attacking fairness is interesting. The authors’ discussion on potential extensions of the method, such as different approaches to measure node vulnerability and potential mitigation methods, is encouraging. The choice of datasets and baselines for validating attack performance is representative, demonstrating the significant effectiveness of the proposed method. Weaknesses: 1. The main concern is that the proposed node-injection-based fairness attack could potentially be mitigated by existing defenses designed for accuracy-based node-injection attacks. The distinction between fairness-targeted and accuracy-targeted attacks is not discussed, and there is a lack of an in-depth discussion on related work concerning node-injection attacks aimed at accuracy. 2. Another concern is the theoretical effectiveness of the claim that "the node injection strategy is evaluated by an increase in the node-level homophily ratio". This homophily ratio does not clearly establish a connection with fairness metrics, i.e., DP/EO. Providing a theoretical guarantee that the proposed node injection strategy results in larger increases in DP/EO compared to random node injection would enhance the validity. 3. Additionally, the motivation of attackers to undermine fairness is not clearly discussed. Adding some real-world examples to demonstrate the motivation of such attacks would strengthen the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: In addition to the weaknesses listed above, my questions for the authors include: 1. The injected node features require the attacker to perform local training on a given training node set. 
Can the attacker obtain the training node set, especially when it contains private information? 2. In the discussion of mitigation methods, such as Reliable Training Nodes and Strengthening Connections Among Groups, the authors focus on general mitigation strategies. These strategies are also suitable for defending against accuracy-targeted node-injection attacks and lack a discussion on the specifics of fairness-targeted attacks. This raises the question of whether existing defenses against accuracy-targeted attacks are sufficient to defend against fairness-targeted attacks. 3. The experimental results in Table 2 show that fairness is compromised while accuracy also decreases, which contradicts the expected trade-off between fairness and accuracy. The authors should explain the reason for this occurrence. In principle, when fairness is compromised with the goal of minimizing accuracy loss, accuracy should improve. As the topic is interesting, I am willing to adjust my score based on the authors' responses. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: 1. The proposed method is designed for binary classification with two sensitive attributes. It should be considered whether and how the proposed method can be extended to multi-class classification and multiple sensitive attributes. 2. The experiments were conducted on a two-layer GCN, which is consistent with the baseline [10]. However, a discussion is expected about the feasibility of the attack on GCNs with different architectures. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[Concerns about the discussion about node-injection-based attacks (W1)]** The main distinction between fairness-targeted and accuracy-targeted attacks is the different attack objectives. The accuracy-targeted attacks aim to undermine the model accuracy, while fairness-targeted attacks aim to deteriorate the model fairness without significantly compromising the model accuracy. The results in Table 3 in our manuscript also support this claim. Although we introduce several node-injection-based attacks in Section 2, we agree that an in-depth review of corresponding related work would be better, and we will add more discussion on node-injection-based attacks to our manuscript in the future. ------ **[Concerns about the defense strategies (W1, Q2)]** Thanks for your insightful comments! We hope to address your concerns on the defense strategies with the following responses: 1. While selecting more reliable training nodes might help resist accuracy-targeted adversarial attacks, our experiments in Figure 2 actually show that this approach **does not completely mitigate** the fairness attacks posed by NIFA. More effective methods are still in demand. 2. We further test the defense capabilities of two classic GNN defense models—GNNGuard [4] and ElasticGNN [5]—against NIFA. The results are shown **in Table R5 in the uploaded PDF**. As shown in the results, **both defense models only maintain the utility performance but fail to fully eliminate the impact of fairness attacks.** ------ **[Concerns about Lemma 1 (W2)]** As we claimed in footnote 2, the relationship between the homophily ratio and fairness is widely discussed in previous work. To better understand the role of the homophily ratio in GNN fairness and mitigate the gap, we would like to provide more theoretical analysis. **Due to space limitation, we place the complete theoretical analysis in the global rebuttal box**. 
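For readers less familiar with the homophily-increase principle, the node-level homophily ratio it relies on (the fraction of a node's neighbors sharing its sensitive attribute) is easy to state in code; the toy graph below is purely illustrative and unrelated to the paper's datasets.

```python
def node_homophily(adj, attr):
    """Node-level homophily ratio: for each node, the fraction of its
    neighbors with the same (sensitive) attribute value.

    adj  : dict mapping node -> iterable of neighbor nodes
    attr : dict mapping node -> attribute value
    Nodes without neighbors get a ratio of 0.0.
    """
    ratios = {}
    for u, nbrs in adj.items():
        nbrs = list(nbrs)
        if not nbrs:
            ratios[u] = 0.0
            continue
        same = sum(1 for v in nbrs if attr[v] == attr[u])
        ratios[u] = same / len(nbrs)
    return ratios

# Toy graph: injecting an edge 0-3 (same attribute) raises node 0's ratio.
adj = {0: [1, 2], 1: [0], 2: [0], 3: []}
attr = {0: "a", 1: "b", 2: "a", 3: "a"}
before = node_homophily(adj, attr)[0]   # 1 of 2 neighbors share 'a'
adj[0].append(3)
after = node_homophily(adj, attr)[0]    # 2 of 3 after the injection
```

This illustrates why connecting injected nodes only to real nodes with the same sensitive attribute pushes node-level homophily up, the quantity the rebuttal ties to fairness degradation.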
------ **[Concerns about the motivation (W3)]** In fact, fairness attacks on GNNs have numerous potential applications. Besides the example of GNN-based recommendation in the introduction, professional social networks like LinkedIn also face some potential risks. Specifically, when GNNs are used to identify high-potential candidates, attackers might use fake accounts to manipulate the model into predicting high potential for their group with a much higher probability, securing better job offers while harming other demographic groups. ------ **[Concerns about the capability of attackers (Q1)]** As we introduced in Appendix B, NIFA operates under the gray-box attack setting, which is realistic and widely studied in previous utility attacks of GNNs [1-3]. In gray-box attack settings, the attackers have access to the training data, including graph, node attributes and training labels, but cannot see the model architecture and parameters. In fact, the training data including node attributes and ground-truth labels are not hard to get in the real world. For example, some user attributes like gender and region are actually public on some social platforms like Weibo, LinkedIn or Twitter. ------ **[Concerns about the accuracy (Q3)]** Although many studies in pursuing GNN fairness suggest a trade-off between utility and fairness—where better fairness often results in lower utility—we believe this trade-off does not fully hold in the context of fairness attacks. For instance, if a GNN model's predictions are entirely aligned with sensitive attributes, resulting in a 100% SP (Statistical Parity), its accuracy would inevitably suffer. Additionally, we would like to further discuss some **potential methods to better alleviate the utility decrease introduced by NIFA**, such as controlling the number of target nodes, i.e. the nodes with the top $k$ highest uncertainty. Intuitively, with more nodes connected with injected nodes, the utility will be more likely to be influenced.
To support our claims, we tune the $k$ in a range of {0.10, 0.25, 0.50, 0.75} in DBLP, and the attack performance on GCN and GraphSAGE is shown **in Table R1 in the uploaded PDF**. It can be seen that, after decreasing $k$, the utility after the attack can be better preserved. ------ **[Concerns about the expandability (L1)]** Thanks for your inspiring question, and we would like to discuss the expandability of NIFA from the following two perspectives: - **Multi-class classification**: In fact, our method can naturally fit the multi-class classification scenarios, since there is no specific requirement on the number of categories in our framework, which can also be verified by the problem definitions in Section 3. Since our datasets are collected from the prior work -- FA-GNN, we mainly focus on binary classification tasks to be consistent. - **Multiple sensitive attributes**: As we claimed in Section 3, NIFA can be easily extended to multiple sensitive attributes. The only distinction is that the attackers need to equally partition the injected nodes into multiple groups instead of two groups during the node injection process, and make sure each injected node will only connect to real nodes with the same sensitive attribute. ------ **[Concerns about the GCN architectures (L2)]** Thanks for your suggestions. We further conduct model architecture analysis for GCN by modifying the model layer and hidden dimension, and the attack results on Pokec-z and Pokec-n are shown **in Table R6 in the uploaded PDF**. It can be seen that NIFA demonstrates promising robustness towards the GCN models with different model layers and hidden dimensions. **References** [1] Adversarial Attacks on Graph Neural Networks via Meta Learning, ICLR 2019. [2] A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning, NeurIPS 2019. [3] Adversarial Attacks on Fairness of Graph Neural Networks, ICLR 2024.
[4] GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks, NeurIPS 2020. [5] Elastic Graph Neural Networks, ICML 2021. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response and clarifications. At this point, I have no further questions. Although the current work still relies on a gray-box attack model and lacks an effective defense mechanism against fairness attacks, I acknowledge these are existing challenges and appreciate the authors' clarification of these limitations. I increased my score based on new experiments and explanations of convincing motivation, better robustness compared to accuracy attacks, theoretically supported proxy metric, and generalizability. --- Reply to Comment 1.1.1: Title: Thank you for the comments Comment: Thank you again for your valuable review and reconsideration of scores!
Summary: The authors propose a Fairness GNN Attack method called Node Injection-based Fairness Attack (NIFA). The proposed method aims to increase the bias of GNN models by injecting nodes into the graph. NIFA identifies nodes with high uncertainty to target them, then connects the injected nodes in such a way increasing the homophily of the graph. The features of the injected nodes are then optimized to balance utility with fairness. NIFA is evaluated on multiple benchmark datasets and compared with Fairness GNN attack methods. Strengths: - The paper is very well written and easy to follow. - The proposed method seems to be technically sound. - The fairness of GNN models and Fairness attack methods are both important and well motivated problems. Weaknesses: - The proposed attack method is not well motivated, could the authors provide concrete application domains/cases where it is not possible to modify/attack the existing edges between nodes in the graph but it is possible to inject nodes into it and connect such nodes to the rest of the graph? - Furthermore, if modifying the edges in graph is considered unrealistic, how is this different from the part in the proposed method where injected nodes are connected to the nodes in the graph ? The limitation of previous works claimed by the authors seem rather arbitrary and inconsistent. If the ability to modify the graph structure is not a realistic assumption, then connecting the injected nodes to the real nodes in the graph should similarly be considered unrealistic. Basically, if connecting injected nodes to the real nodes in the graph is permissible, then it should be permissible for previous fairness attack methods to modify the graph structure by adding edges to it (without deleting existing edges). 
- The utilized 1% perturbation rate is rather high, in real-world large graph datasets consisting of millions of nodes, this is equivalent to injecting the graph with tens of thousands of nodes which can hardly be considered unnoticeable. The authors should experiment with significantly smaller perturbation rates and compare the corresponding results against the relevant baselines in the literature. - It is not possible to evaluate the effectiveness of proposed method against the relevant fairness attack methods using a single dataset only as 2 out of the 3 fairness attack methods are reported without results on 2 datasets. The authors should include additional datasets and/or additional Fairness attack methods. - All 3 fairness GNN Attack methods [11, 13, 40] report results on one or both Pokec datasets. Therefore, if the author do not have access to the computational resources required to run the aforementioned baselines, they should run their proposed method on the setups of those 3 baselines and report the results of the 3 baselines [11, 13, 40] from the corresponding works. In this manner, we would be able to properly evaluate the effectiveness of the proposed method against the relevant Fairness Attack baselines in the literature. - It is unclear how the authors ensure a fair budget across all attack methods given the different nature of the attacks where some methods modify the graph structure and node features, while other methods inject the graph with additional nodes. Could the authors please elaborate on this point and how they ensured fairness across budgets of different types of attack methods ? - The train/val/test split utilized assigns half of the nodes to labeled training nodes. However, in most realistic scenarios a significantly smaller percentage of the nodes are assigned to labeled training nodes. How does the proposed method perform when the number of training nodes in the graph is limited ? 
This is specially important given that targeted nodes with high uncertainty in this work are a subset of the training nodes. The authors should conduct experiments with limited labeled nodes and evaluate the effectiveness of the proposed method under this more realistic scenario. - The authors should evaluate their proposed method on additional common benchmark datasets for Fair GNN learning task such as Credit, Bail and NBA datasets. Technical Quality: 2 Clarity: 3 Questions for Authors: Please refer to the Weaknesses section. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 1 Limitations: The authors adequately discuss the limitations of their proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for these insightful comments. Specifically, we aim to address the concerns of the reviewer with the following responses. ------ **[Concerns about the attack scenarios (W1, W2)]** Take a social graph like Twitter as an example, where each node denotes a user and each link represents a following relationship. In this case, modifying the edges between existing nodes means altering the following relationships between real users. **Such operations usually require the attackers to hack the user accounts, which is hard and time-consuming.** However, with the node-injection-based attack, the attackers only need to **create several zombie accounts (node-injection) and follow several real users**, which is much easier. In fact, compared with attacks modifying existing edges, **the superiority of node-injection-based attacks has been widely verified in multiple prior works [1-4]**. However, all these works only focus on attacking GNN utility, while neglecting the fairness vulnerability of GNNs. ------ **[Concerns about the perturbation ratio (W3)]** 1. It seems that there is some misunderstanding about the perturbation ratio. In our work, the perturbation ratio **is based on the labeled nodes** in the graph instead of all nodes in the graph. Specifically, as we introduced in Appendix G, the injected nodes for Pokec-z, Pokec-n and DBLP are only 102, 87 and 32 respectively, which is around **0.15%**, **0.13%** and **0.16%** of all nodes in the original graph. 2. In fact, the proportion of node injections in our work is comparable to or even lower than that of other related node-injection-based attacks.
To support our claim, we summarize the default node injection ratios (relative to the total number of nodes in the graph) of several node-injection-based attacks below:

| | NIPA[1] | TDGIA[2] | MaxiMal[4] | Ours |
| :-----------------------: | :-----: | :---------: | :--------: | :---------: |
| **Node injection ratios** | 1% | 0.07%-0.30% | 1% | 0.13%-0.16% |

3. **We also conduct experiments by decreasing the perturbation ratio to 0.08%** (relative to the total number of nodes) on DBLP, i.e. 16 injected nodes. The attack performance on GCN is shown **in Table R2 in the uploaded PDF.** It can be seen that, even with a more limited perturbation rate, NIFA still achieves the best fairness attack performance compared with other baselines. ------ **[Concerns about the baselines & datasets (W4, W5, W8)]** - **Baselines:** Thanks for your constructive feedback. To the best of our knowledge, **FA-GNN, FATE and G-FairAttack are the only three attack methods on GNN fairness.** We would be more than happy to provide additional comparisons if the reviewer can clarify other missing baselines on GNN fairness attacks. - **Datasets:** In fact, our datasets are consistent with FA-GNN, the first fairness attack on GNNs. We agree that comparisons on more datasets could better verify the effectiveness of NIFA. In detail, we further conduct experiments on the setup of FATE and G-FairAttack, i.e. Pokec datasets with fewer nodes. We also examine the effectiveness of NIFA on the German benchmark, which is widely used in previous GNN fairness studies [5-7]. The experimental results are shown **in Table R4 in the uploaded PDF**, where NIFA still achieves competitive fairness attack performance. ------ **[Concerns about the fair budget (W6)]** For methods that only inject new nodes, such as AFGSM, TDGIA and G2A2C, we require the number of injected nodes and average degree of injected nodes to be the same as ours.
For methods that require modifying the graph structure, such as FA-GNN, FATE and G-FairAttack, **we set the number of modified edges to be the same as our injected edges**, i.e. the added edges between injected nodes and original nodes. We will enhance the clarity of this part in Appendix G in the future. ------ **[Concerns about the label ratio (W7)]** Thanks for your comments! In fact, our datasets are collected from the previous work -- FA-GNN, and our train/val/test ratios are set to be consistent with its settings. We agree that there might be fewer labeled nodes in more realistic scenarios. To evaluate the effectiveness of NIFA under such settings, we decrease the training ratio from 50% to 25%, 10% and 5%, respectively. The attack performance of NIFA is shown **in Table R3 in the uploaded PDF**. It can be seen that, even with much fewer labeled training nodes, NIFA still consistently demonstrates promising attack performance. ------ We sincerely thank the reviewer for the thoughtful comments and constructive feedback. We sincerely hope that our responses can effectively address your concerns and contribute to a better version of our research. If you have any further questions or confusion, please do not hesitate to reach out to us. We would be more than willing to assist and provide further clarification. ------ **References** [1] Adversarial Attacks on Graph Neural Networks via Node Injections: A Hierarchical Reinforcement Learning Approach, WWW 2020. [2] TDGIA: Effective Injection Attacks on Graph Neural Networks, KDD 2021. [3] Let Graph be the Go Board: Gradient-free Node Injection Attack for Graph Neural Networks via Reinforcement Learning, AAAI 2023. [4] Maximizing Malicious Influence in Node Injection Attack, WSDM 2024. [5] EDITS: Modeling and Mitigating Data Bias for Graph Neural Networks, WWW 2022. [6] Improving Fairness in Graph Neural Networks via Mitigating Sensitive Attribute Leakage, KDD 2022.
[7] FairSIN: Achieving Fairness in Graph Neural Networks through Sensitive Information Neutralization, AAAI 2024. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their response. After carefully reviewing it, I have updated my original score. --- Rebuttal 2: Title: Thank you for the reply Comment: Thanks for your valuable feedback and reconsideration of scores!
Summary: This paper examines the vulnerability of GNN fairness under adversarial attacks. A gray-box node injection-based poisoning attack method, namely NIFA, is proposed. NIFA follows the newly designed uncertainty maximization principle and homophily-increase principle. Then, multiple novel objective functions are proposed to guide the optimization of the injected nodes’ features, impacting the victim GNN’s fairness from a feature perspective. The experiment is extensive and solid. Strengths: S1: The problem of the vulnerability of GNN fairness is interesting and very important. This paper is well-motivated. S2: The proposed method is technically sound. S3: The experiment is solid and extensive. Very comprehensive experimental results are reported in the appendix including hyper-parameter testing, ablation studies, analysis of poisoned graph, etc. S4: The paper is well-written and very easy to follow. Weaknesses: W1: Node injection will change the topological structure of a graph. However, several existing studies work on structural fairness in GNN. I am wondering whether the proposed fairness attacks are applicable to those works. What if the graph structure changes over time and becomes dynamic? Is the proposed attack applicable to dynamic fairness GNN methods? Some references: [1] Uncovering the Structural Fairness in Graph Contrastive Learning. NeurIPS 2022. [2] Tail-GNN: Tail-Node Graph Neural Networks. KDD 2021. [3] On Generalized Degree Fairness in Graph Neural Networks. AAAI 2023. [4] Toward Structure Fairness in Dynamic Graph Embedding: A Trend-aware Dual Debiasing Approach. KDD 2024. Technical Quality: 3 Clarity: 4 Questions for Authors: Please see the weakness. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for these insightful comments. Specifically, we aim to address the concerns of the reviewer with the following responses. ------ **[Question about the Structural Fairness (W1)]** Thanks for your inspiring question! According to prior research, structural fairness and attribute fairness stem from different reasons [3]. The structural fairness (such as degree fairness in TailGNN [2] and DegFairGNN [3]) may be caused by the limited neighborhood information, while attribute fairness mainly comes from the "homophily principle" in GNN message propagation and the correlation between sensitive attributes and other attributes. In this way, the rationale behind NIFA such as the "homophily-increase principle" may be more suitable for attribute fairness instead of structural fairness. However, we highly agree that the adversarial attacks on structural fairness would be another inspiring research direction in the future, and we would like to add this to our discussion in the next version. ------ **[Question about the Dynamic Fairness (W1)]** To the best of our knowledge, the dynamic fairness in GNN is still under-explored, and the only related work is based on structural fairness [1], which is not suitable for NIFA as we discussed before. We extend our best gratitude for your insightful feedback, and we believe that more efforts are still in demand for a more detailed definition of dynamic fairness based on the sensitive attributes before launching a corresponding fairness attack. ------ We sincerely thank the reviewer for the encouraging comments and thoughtful feedback. We sincerely hope that our responses can effectively address your concerns and contribute to a better version of our research. If you have any further questions or confusion, please do not hesitate to reach out to us. We would be more than willing to assist and provide further clarification. 
**Reference** [1] Toward Structure Fairness in Dynamic Graph Embedding: A Trend-aware Dual Debiasing Approach. KDD. 2024. [2] Tail-GNN: Tail-Node Graph Neural Networks. KDD. 2021. [3] On Generalized Degree Fairness in Graph Neural Networks. AAAI. 2023. --- Rebuttal Comment 1.1: Comment: Thank you very much for your responses which address my concerns well. I will keep my score.
Summary: This paper proposes NIFA, a novel fairness attack method via node injection. In particular, the authors use the uncertainty maximization principle to select the target node to attack and randomly connect the injected nodes to the target nodes in the same sensitive group to increase the overall homophily. Finally, the authors use direct optimization to tune the features of the injected nodes to maximize the unfairness and minimize the predictive loss. Experimental results show the effectiveness of NIFA in reducing the fairness of GNNs. Strengths: - This paper delivers the first node injection attacks on fairness for GNNs. - The paper is overall well-organized and easy to follow. - The methodology is basically well-motivated and verified by empirical studies. Weaknesses: - Although experiments show that NIFA distinctly increases the bias level of GNNs, the accuracy decreases as well, especially for DBLP. It can be helpful if more discussions on the tradeoff between accuracy and bias level are provided. - Complexity analysis is not included in the paper. A succinct discussion (theoretical or empirical) on the computational efficiency of NIFA is encouraged. - There exists a gap between Lemma 1 and the goal of the homophily-increase principle. Increased homophily cannot guarantee the increase of $\Delta_{sp}$ and $\Delta_{eo}$. - The overall framework of NIFA is simple but kind of incremental. The technical novelty and provided insights are somewhat limited. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The limitations including the potential negative societal impact of this work are sufficiently discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
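The group-fairness gaps $\Delta_{sp}$ and $\Delta_{eo}$ referenced in W3 are standard quantities. As a point of reference, here is a minimal sketch of how they are typically computed for binary predictions with a binary sensitive attribute (the function names and toy data are our illustration, not taken from the paper):

```python
import numpy as np

def delta_sp(y_pred, s):
    """Statistical parity gap: |P(y_hat=1 | s=0) - P(y_hat=1 | s=1)|."""
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

def delta_eo(y_pred, y_true, s):
    """Equal opportunity gap: |P(y_hat=1 | y=1, s=0) - P(y_hat=1 | y=1, s=1)|."""
    g0 = (s == 0) & (y_true == 1)
    g1 = (s == 1) & (y_true == 1)
    return abs(y_pred[g0].mean() - y_pred[g1].mean())

# Toy predictions over six nodes with a binary sensitive attribute s
s      = np.array([0, 0, 0, 1, 1, 1])
y_true = np.array([1, 1, 0, 1, 1, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0])
print(round(delta_sp(y_pred, s), 3), round(delta_eo(y_pred, y_true, s), 3))  # → 0.667 0.5
```

Under these definitions, a fairness attack such as the one reviewed here aims to drive the gaps up while keeping accuracy roughly intact.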
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the constructive comments, and we aim to address the concerns with the following responses. ------ **[Concerns about the utility (W1)]** We will first analyze the potential reasons for utility decrease, and then provide some solutions for balancing the trade-off between utility and fairness attack. - **Potential reasons:** Compared with evasion attacks, where only input data is modified and the victim model remains unchanged, poisoning attacks naturally result in a larger utility change due to the change of victim model. - **Mitigation solutions:** There are some potential methods to control the trade-off between utility and fairness of NIFA, **such as controlling the number of target nodes**, i.e. the nodes with the top $k$ highest uncertainty. Intuitively, with more nodes connected with injected nodes, the utility will be more likely to be influenced. To support our claims, we tune the $k$ in a range of {0.10, 0.25, 0.50, 0.75} in DBLP, and the attack performance on GCN and GraphSAGE is shown **in Table R1 in the uploaded PDF**. It can be seen that, after decreasing $k$, the utility after the attack can be effectively preserved. ------ **[Concerns about the complexity (W2)]** We have included an efficiency analysis for NIFA **in Appendix H.5**. The results indicate that NIFA demonstrates much higher efficiency compared with other fairness attacks on GNNs. ------ **[Concerns about the Lemma 1 (W3)]** As we claimed in footnote 2, the relationship between homophily ratio and fairness is widely discussed in previous work. For a better understanding of the role of homophily ratio in GNN fairness and to mitigate the gap, we provide the following theoretical analysis: - **Theoretical analysis:** Denote $P(s|x)$ as a predictor that estimates the sensitive attributes $s$ given node features $x$.
Inspired by [1], we utilize a linear intensity function $\mathcal{D} _{\theta}(s|x)$ with parameter $\theta$ to define its predictive capability, where $\mathcal{D} _{\theta}(s|x) \sim \mathcal{N}(\mu, \sigma^2)$ and $\mathcal{D} _{\theta}(\overline{s}|x) \sim \mathcal{N}(\overline{\mu}, \sigma^2)$, where $\overline{s}$ is the false sensitive attribute and $\mathcal{N}(\cdot, \cdot)$ is the Gaussian distribution. In this way, $\mu > \overline{\mu}$ indicates that $\mathcal{D} _{\theta}$ prefers to provide a higher intensity score for the true sensitive attribute given the node embeddings $x$, and the larger $\mu-\overline{\mu}$ is, the stronger the inference capability of $\mathcal{D} _{\theta}$ is and the more biased the node embeddings $x$ are. We further formulate the message propagation process in GNN as $x_i^{\prime}=x _i+x _i^{neigh}$, where $x _i^{neigh}$ denotes the average neighbor embeddings for node $i$, and $x _i^{neigh}=P _i^{same}x _i^{same}+P _i^{diff}x _i^{diff}$. $P _i^{same}$ denotes the probability of selecting a neighbor with the same sensitive attribute with node $i$, i.e. homophily ratio, and $P _i^{same} + P _i^{diff}=1$. $x _i^{same}$ denotes the average neighbor embeddings with the same sensitive attribute with node $i$.
​ In this way, we can have the following derivation of the equations: $$ \\begin{align*} \\mathbb{E}\\left\\{ \\mathcal{D} _{\\theta}(s _i \\mid x _i') - \\mathcal{D} _{\\theta}(\\overline{s _i} \\mid x _i') \\right\\} &= \\mathbb{E}\\left\\{ \\mathcal{D} _{\\theta}(s _i \\mid x _i) + \\mathcal{D} _{\\theta}(s _i \\mid x _i^{\\text{neigh}}) - \\mathcal{D} _{\\theta}(\\overline{s _i} \\mid x _i) - \\mathcal{D} _{\\theta}(\\overline{s _i} \\mid x _i^{\\text{neigh}}) \\right\\} \\\\ &= \\mathbb{E}\\left\\{ \\mathcal{D} _{\\theta}(s _i \\mid x _i) - \\mathcal{D} _{\\theta}(\\overline{s _i} \\mid x _i) + P_i^{\\text{same}} \\mathbb{E}\\left\\{ \\mathcal{D} _{\\theta}(s _i \\mid x _i^{\\text{same}}) - \\mathcal{D} _{\\theta}(\\overline{s _i} \\mid x _i^{\\text{same}}) \\right\\} \\right. \\\\ &\\quad \\left. - P _i^{\\text{diff}} \\mathbb{E}\\left\\{ \\mathcal{D} _{\\theta}(s _i \\mid x _i^{\\text{diff}}) - \\mathcal{D} _{\\theta}(\\overline{s _i} \\mid x _i^{\\text{diff}}) \\right\\} \\right\\} \\\\ &= \\mathbb{E}\\left\\{ \\mathcal{D} _{\\theta}(s _i \\mid x _i) - \\mathcal{D} _{\\theta}(\\overline{s _i} \\mid x _i) + \\left( P _i^{\\text{same}} - P _i^{\\text{diff}} \\right) (\\mu - \\overline{\\mu}) \\right\\} \\\\ \\end{align*} $$ ​ It can be seen that, the second term $\\left( P _i^{\\text{same}} - P _i^{\\text{diff}} \\right) (\\mu - \\overline{\\mu})$ $= ( 2*P _i^{\\text{same}}- 1) (\\mu - \\overline{\\mu})$ **is positively correlated to the homophily ratio**, and with the increase of homophily ratio as we introduced in Lemma1, the message propagation process will result in larger unfairness. ------ **[Concerns about the contributions (W4)]** The major contributions of this paper are three-fold: 1. We are the first to launch a node-injection-based fairness attack on GNNs to the best of our knowledge, and highlight the vulnerability of GNN fairness. 2. 
We design a simple yet effective method NIFA, which consists of two novel and insightful principles for guiding node injection operations. Extensive experiments verify the effectiveness, deceptiveness and efficiency of NIFA, which can effectively deteriorate the fairness of GNNs, even of fair GNNs, with merely 1% injected nodes. 3. From the perspective of responsible AI, we summarize several key insights from the success of NIFA, which we believe can inspire more in-depth research on robust GNN fairness. ------ We sincerely hope that our responses can effectively address your concerns and contribute to a better version of our research. If you have any further questions or confusion, please do not hesitate to reach out to us. We would be more than willing to assist and provide further clarification. **Reference** [1] FairSIN: Achieving Fairness in Graph Neural Networks through Sensitive Information Neutralization, AAAI 2024. --- Rebuttal Comment 1.1: Comment: I thank the authors for providing a detailed response. The provided analysis of the connection between homophily ratio and bias seems to rely on strong assumptions, but it would be fine if this has been verified empirically by previous works. Most of my concerns are well addressed. Hence, I am happy to raise my score. --- Reply to Comment 1.1.1: Comment: Thanks again for your valuable comments and reconsideration of scores! To better understand the relationship between the homophily ratio and fairness, we would like to provide a more thorough discussion and summarization of previous related work in the final version.
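The positive correlation claimed in the homophily–bias derivation above can also be checked numerically. The sketch below is our own illustration (not part of the rebuttal): it instantiates the intensity score as a linear functional $\langle w, x \rangle$, draws Gaussian group features whose means differ along $w$, and applies the expected-neighbor mixing $x_i' = x_i + P^{same}\bar{x}^{same} + P^{diff}\bar{x}^{diff}$ from the derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4000, 8
w = rng.normal(size=d)
w /= np.linalg.norm(w)          # direction of the linear score D(s|x) = <w, x>
mu_gap = 1.0                    # how strongly raw features encode the sensitive attribute

def score_gap(p_same):
    """Inter-group gap in <w, x'> after one propagation step with
    x' = x + p_same * x_bar_same + (1 - p_same) * x_bar_diff."""
    x0 = rng.normal(size=(n, d)) - 0.5 * mu_gap * w   # group s = 0
    x1 = rng.normal(size=(n, d)) + 0.5 * mu_gap * w   # group s = 1
    x0p = x0 + p_same * x0.mean(0) + (1 - p_same) * x1.mean(0)
    x1p = x1 + p_same * x1.mean(0) + (1 - p_same) * x0.mean(0)
    return (x1p @ w).mean() - (x0p @ w).mean()

gaps = [score_gap(p) for p in (0.5, 0.7, 0.9)]
print([round(g, 2) for g in gaps])   # the gap grows with the homophily ratio
```

In this toy setting the gap grows roughly linearly in $2P^{same}-1$, matching the $(P_i^{same}-P_i^{diff})(\mu-\overline{\mu})$ term in the rebuttal's derivation.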
Rebuttal 1: Rebuttal: ## To all reviewers: We sincerely thank all reviewers for their valuable feedback and encouraging comments on our paper. All four reviewers consistently approve of the topic: **“interesting”**, **“well-motivated”**, and **“encouraging”**, and the presentation of our work: **"well-written"**, **"very easy to follow"** and **"highly readable"**. Also, most of the reviewers agree that our experimental results are: **“significant effectiveness”**, **“solid and extensive”**, and **“comprehensive”**. We also notice that there are some misunderstandings and concerns about our paper. **Due to space limitations, we have included most of the experimental results (figures and tables) in the attached PDF, with corresponding references provided in the rebuttal.** We sincerely hope that our responses have effectively addressed your concerns and contributed to a deeper understanding of our research. If you have additional questions or confusion, please feel free to contact us without hesitation. We would sincerely like to provide more information and clarification if necessary. ------ ## To reviewer #wXHX: Due to space limitations, we place the theoretical analysis for the relationship between homophily ratio and fairness here. **Theoretical analysis:** Denote $P(s|x)$ as a predictor that estimates the sensitive attributes $s$ given node features $x$. Inspired by [1], we utilize a linear intensity function $\mathcal{D} _{\theta}(s|x)$ with parameter $\theta$ to define its predictive capability, where $\mathcal{D} _{\theta}(s|x) \sim \mathcal{N}(\mu, \sigma^2)$ and $\mathcal{D} _{\theta}(\overline{s}|x) \sim \mathcal{N}(\overline{\mu}, \sigma^2)$, where $\overline{s}$ is the false sensitive attribute and $\mathcal{N}(\cdot, \cdot)$ is the Gaussian distribution.
In this way, $\mu > \overline{\mu}$ indicates that $\mathcal{D} _{\theta}$ prefers to provide a higher intensity score for the true sensitive attribute given the node embeddings $x$, and the larger $\mu-\overline{\mu}$ is, the stronger inference capability of $\mathcal{D} _{\theta}$ and more biased node embeddings $x$ are. We further formulate the message propagation process in GNN as $x_i^{\prime}=x _i+x _i^{neigh}$, where $x _i^{neigh}$ denotes the average neighbor embeddings for node $i$, and $x _i^{neigh}=P _i^{same}x _i^{same}+P _i^{diff}x _i^{diff}$. $P _i^{same}$ denotes the probability of selecting a neighbor with the same sensitive attribute with node $i$, i.e. homophily ratio, and $P _i^{same} + P _i^{diff}=1$. $x _i^{same}$ denotes the average neighbor embeddings with the same sensitive attribute with node $i$. In this way, we can have the following derivation of the equations: $$ \\begin{align*} \\mathbb{E}\\left\\{ \\mathcal{D} _{\\theta}(s _i \\mid x _i') - \\mathcal{D} _{\\theta}(\\overline{s _i} \\mid x _i') \\right\\} &= \\mathbb{E}\\left\\{ \\mathcal{D} _{\\theta}(s _i \\mid x _i) + \\mathcal{D} _{\\theta}(s _i \\mid x _i^{\\text{neigh}}) - \\mathcal{D} _{\\theta}(\\overline{s _i} \\mid x _i) - \\mathcal{D} _{\\theta}(\\overline{s _i} \\mid x _i^{\\text{neigh}}) \\right\\} \\\\ &= \\mathbb{E}\\left\\{ \\mathcal{D} _{\\theta}(s _i \\mid x _i) - \\mathcal{D} _{\\theta}(\\overline{s _i} \\mid x _i) + P_i^{\\text{same}} \\mathbb{E}\\left\\{ \\mathcal{D} _{\\theta}(s _i \\mid x _i^{\\text{same}}) - \\mathcal{D} _{\\theta}(\\overline{s _i} \\mid x _i^{\\text{same}}) \\right\\} \\right. \\\\ &\\quad \\left. 
- P _i^{\\text{diff}} \\mathbb{E}\\left\\{ \\mathcal{D} _{\\theta}(s _i \\mid x _i^{\\text{diff}}) - \\mathcal{D} _{\\theta}(\\overline{s _i} \\mid x _i^{\\text{diff}}) \\right\\} \\right\\} \\\\ &= \\mathbb{E}\\left\\{ \\mathcal{D} _{\\theta}(s _i \\mid x _i) - \\mathcal{D} _{\\theta}(\\overline{s _i} \\mid x _i) + \\left( P _i^{\\text{same}} - P _i^{\\text{diff}} \\right) (\\mu - \\overline{\\mu}) \\right\\} \\\\ \\end{align*} $$ It can be seen that the second term $\\left( P _i^{\\text{same}} - P _i^{\\text{diff}} \\right) (\\mu - \\overline{\\mu})$ $= ( 2*P _i^{\\text{same}}- 1) (\\mu - \\overline{\\mu})$ is positively correlated to the homophily ratio, and with the increase of the homophily ratio as we introduced in Lemma 1, the message propagation process will result in larger unfairness. **Reference** [1] FairSIN: Achieving Fairness in Graph Neural Networks through Sensitive Information Neutralization, AAAI 2024. Pdf: /pdf/d96173317284a1440da464543553d7abc218ca76.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Chain of Agents: Large Language Models Collaborating on Long-Context Tasks
Accept (poster)
Summary: This paper proposes a new method, "Chain-of-Agents" (CoA), to augment the long-context handling capabilities of large language models (LLMs). CoA is a framework designed to enhance the processing of long contexts by sequentially using multiple agents to handle different chunks of input text. In CoA, worker agents manage different segments sequentially and communicate their findings. These findings are then synthesized by a manager agent into a coherent final output, effectively aggregating information and reasoning across the entire context. The author conducted experiments on a wide range of long-context tasks to verify the advantages of CoA. Strengths: 1. The proposed Chain-of-Agents method is an interesting approach, and the authors have experimentally validated its effectiveness. 2. The writing and presentation of the article are great, making it easy to read and follow overall. Weaknesses: 1. My primary concern is the novelty of this submission. Breaking long texts into multiple chunks and processing them sequentially is a well-established practice in both long-content processing and generation. For example, in RecurrentGPT [1], the authors introduced a long-short memory mechanism to store intermediate states during processing, improving the quality of long content generated by LLMs. This is similar to the communication unit (CU) mechanism mentioned in this paper, but the authors do not sufficiently discuss this similarity. Additionally, previous work such as LongAgent [2] proposed segmenting the input long text into several chunks and assigning them to corresponding members. The authors lack sufficient discussion and experimental comparison on utilizing memory mechanisms or agent mechanisms for processing long information. 2. The baselines used in the author's experiments are relatively weak. 
While the related works section discusses some studies on long text processing, such as references [9] and [13], the implementation only compares RAG, Vanilla, and CoA. In the agent-based comparisons, the authors primarily compare against their own designed Merge and Hierarchical mechanisms for processing long texts. It is therefore challenging to determine the advantages of the proposed method over existing strong baselines in the field based on the current experiments. 3. Given the existing work on agents collaborating to process multiple chunks, the authors could enhance the novelty of their submission by delving deeper based on the discussion in Section 5.6. Specifically, they could explore the most effective methods of collaboration in multi-chunk processing and agent cooperation. This could add significant technical depth to the paper. [1] Zhou, Wangchunshu, et al. "RecurrentGPT: Interactive generation of (arbitrarily) long text." arXiv preprint arXiv:2305.13304 (2023). [2] Zhao, Jun, et al. "LongAgent: Scaling Language Models to 128k Context through Multi-Agent Collaboration." arXiv preprint arXiv:2402.11550 (2024). Technical Quality: 2 Clarity: 3 Questions for Authors: 1. How is the chunk split? I did not find a clear description of how the chunks are produced. Could there be a risk of text being abruptly truncated in the middle? 2. As mentioned in the abstract, "input reduction has no guarantee of covering the part with needed information." I wonder whether sequential chunk processing will also encounter these challenges. Could the communication unit potentially drop necessary information if some information is only useful in the context of the following content? 3. In Appendix 1, regarding the calculation of time complexity in lines 592-595, if the model's processing length limit is k and no additional mechanisms are introduced, is the computational complexity of attention calculation O(k^2) or O(n^2)? Please educate me if I miss something important. 4. 
"a LLM" in line 104 -> "an LLM" Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable suggestions. We provide answers to all your weakness points and questions below. We hope these resolve your concerns. W1: **Novelty and other baselines**: While chunking the input into multiple segments seems intuitive, the novelty of CoA lies in the chain communication of multiple agents over the input using natural language, which has not been explored by previous studies. LongAgent [1] does not allow communication between workers, making it hard for each worker to understand what is happening at other agents, which hurts performance on tasks that need long dependencies. Besides, the authors of LongAgent state that the decomposition of the question is also a limitation of LongAgent: failure to decompose multi-hop or complex questions directly leads to wrong answers. Additionally, some structures of LongAgent, such as conflict resolution, are transferable to CoA. In contrast, CoA concentrates on improving long-dependency reasoning (such as multi-hop question answering) and does not depend on the decomposition capability of the manager. Regarding RecurrentGPT [2], although it uses a chain structure for long story generation, the task and motivation are different: they focus on memorizing the context to plan better and generate a longer story, whereas CoA focuses on long-dependency and complex reasoning over the source text rather than the output. We also compare the performance of CoA with RecurrentGPT and LongAgent, where CoA outperforms LongAgent by 16% on NIAH tasks and RecurrentGPT by 20% on HotpotQA. More results can be found in GR 2 and Figure 2 in the PDF. 
The results are listed in the table:

| LongAgent | Accuracy |
|-----------------|----------|
| GPT-4 (128k) | 62.00 |
| LongAgent | 81.53 |
| Text-bison (8k) | 26.00 |
| CoA (8k) | **97.80** |

| HotpotQA | |
|--------------|-------|
| RecurrentGPT | 32.54 |
| CoA (8k) | **53.62** |

We will include these discussions and comparisons in our paper. W2: **Stronger baselines**: We believe the baselines used are already strong given the literature in this direction. We are open to benchmarking against any specific suggestions. We also considered other baselines but modified them because they were weaker. For example, WalkingMaze [5] also splits the input into chunks; we found it did not work for our task (less than 40% on HotpotQA). Thus, we modified its structure and built a new baseline named Hierarchical, which obtains much stronger performance (50.62% on HotpotQA). Also, as stated in W1, LongAgent and RecurrentGPT perform much worse than CoA and are thus weaker than the multi-agent baseline we used. We have further added comparisons of CoA with the previous SOTA below, even including methods that require training (indicated with *). As can be seen, CoA achieves better or comparable results on all datasets, improving HotpotQA and RepoBench-P by a large margin. The performance on some datasets is lower than SOTA because those results come from models trained on domain-specific data, such as Qasper.

| | HotpotQA | MuSique | Qasper | NarrativeQA | Quality | QMsum | GovReport | BookSum | RepoBench-P |
|---------------|----------|----------|-----------|-------------|----------|------------|-----------|----------|-------------|
| Previous Best | 54.4 [1] | 40.4 [1] | 53.9* [2] | 26 [1] | 89.2 [3] | 22.44* [2] | 26.3 [3] | 18.5* [4] | 56.47 [1] |
| Ours Best | 62.04 | 42.49 | 38.01 | 25.26 | 83.8 | 17.67 | 26.98 | 17.47 | 73.05 |

We will include WalkingMaze, LongAgent, RecurrentGPT, and other SOTA models in our final paper. Thank you for the great suggestions! 
W3: **More agent collaboration**: We aim to emphasize a simple yet effective framework with strong performance to create a higher impact. While it would indeed be helpful to include more analysis and more complex frameworks, we prefer to use this paper to demonstrate the potential of chain communication. As mentioned by Reviewers vGWD and FiRU, being simple and intuitive yet effective is the first strength of the proposed framework. As described, we have experimented with fairly complex agent collaboration, including bi-directional, permutation, hierarchical (multiple layers of information processing), WalkMaze, RecurrentGPT, etc., and the current CoA framework performs the best. In the future, we want to explore more directions, such as high efficiency in communication, complex communication schemas, etc. Q1 **Chunking**: Please refer to the Algorithm in the PDF. While there is a small risk of chunking text in the middle, CoA is robust to chunking because the information from previous segments is sent to the next agent for processing, whereas in baselines such as LongAgent, workers cannot communicate with siblings and lose the context if the text is chunked in the middle. Q2 **Information loss**: This happens in CoA occasionally, but its effect is negligible. We have added an analysis to further probe the information loss, as described in W3 of Reviewer FiRU. It shows that CoA yields negligible information loss in the final results (only 1%-4%). Q3 **Time complexity**: It should be O(k^2) because auto-regressive LLMs such as GPT and LLaMA only attend to the left side of the input. Thus, each generated token attends to at most $k$ tokens [6]. We will clarify this in our paper. [1] LLM Maybe LongLM: Self-extend LLM context window without tuning. ICML 2024. [2] SCROLLS: Standardized comparison over long language sequences. EMNLP 2022. [3] ZeroSCROLLS: A zero-shot benchmark for long text understanding. 
EMNLP Findings 2023 [4] RST-LoRA: A Discourse-Aware Low-Rank Adaptation for Long Document Abstractive Summarization. NAACL 2024. [5] Walking down the memory maze: Beyond context limit through interactive reading. arXiv, 2023. [6] Improving language understanding by generative pre-training. arXiv, 2018. --- Rebuttal Comment 1.1: Comment: I appreciate the author's clear and point-by-point rebuttal, as well as the supplementary experiments provided. However, after thoroughly re-examining the paper, the rebuttal, the feedback from the other reviewer, and the related works in this area, I still have **significant concerns** regarding the novelty and the experimental validation of this work. First, as I mentioned in my initial review, the novelty of this submission remains my primary concern. I agree with reviewer *vGWD*'s comment that "*the technical depth of this work is limited.*" Although the authors compare LongAgent and RecurrentGPT in their general response, asserting that "LongAgent **does not allow communication between agents**, making it hard for each agent to understand what is happening with others," various communication mechanisms exist in multi-agent systems (see Section 3.3 in [1] and the vertical and horizontal multi-agent architecture definition in [2]). While CoA uses decentralized communication and LongAgent employs centralized communication, based on my understanding, the central agents in centralized communication essentially function as the communication units described in your paper. Additionally, you mention that LongAgent identifies the decomposition of the input question as a limitation. Could you clarify how CoA addresses this issue? From my understanding, CoA does not involve the decomposition of the input question. Furthermore, as the authors stated, "Regarding RecurrentGPT, although it uses a chain structure to perform long story generation, the task and motivation are different." 
However, this further diminishes the novelty of the proposed method. In my view, many components of the proposal are common practices in multi-agent system design and long-content generation/processing. The work appears to be a fine-tuned combination of existing methods rather than a deeply innovative approach. In my opinion, the limited novelty does not meet the standards of NeurIPS. Second, regarding the stronger baseline, the authors only tested LongAgent on NIAH PLUS. Given that LongAgent is closely related to your work and was published three months before your submission, it should be comprehensively tested as a stronger baseline across all benchmarks you tested. Without this, it is difficult to claim that your communication unit or designed agent system is more efficient. Overall, I appreciate the effort the authors have put into the rebuttal. Unfortunately, it does not address my primary concern regarding the work's novelty. I remain open to further discussion with the authors in the coming days. References: [1] Guo, Taicheng, et al. "Large language model based multi-agents: A survey of progress and challenges." arXiv preprint arXiv:2402.01680 (2024). [2] Masterman, Tula, et al. "The landscape of emerging AI agent architectures for reasoning, planning, and tool calling: A survey." arXiv preprint arXiv:2404.11584 (2024). --- Rebuttal 2: Title: Responses from Authors (Part 1) Comment: Thanks so much for carefully reading our responses and providing such insightful questions and thoughts! Below, we first re-state our novelty aspects more clearly and then comment on the raised concerns one by one, hoping to make our claims clearer to you and address your concerns! CoA has a unique positioning in the multi-agent LLM literature. The table below illustrates the comparison of related work and their topologies (after carefully reading surveys [1] and [2] and other literature). 
Broadly, prior work on multi-agent LLM systems for long input tasks has a centralized design, and existing decentralized designs do not extend effectively to long context tasks. To the best of our knowledge, CoA is the first to use a decentralized structure on long input tasks and is more effective than baselines such as WalkMaze and RAG.

| Work | Task | Type | Communication Schema |
|---------------------------------|---------------------------|---------------|-------------------|
| [Chen et al., 2023d] | Multi-robot planning | Decentralized | Circle |
| RoCo [Mandi et al., 2023] | Multi-robot collaboration | Decentralized | Planning Pipeline |
| CoELA [Zhang et al., 2023c] | Multi-agent cooperation | Decentralized | Memory Module |
| MAD [Du et al., 2023] | Improving Factuality | Decentralized | Debate |
| ReConcile [Chen et al., 2023] | Reasoning | Decentralized | Round Table |
| WalkMaze [Chen et al., 2023] | Long Input | Centralized | Tree Structure |
| LongAgent [Zhao et al., 2024] | Long Input | Centralized | Tree Structure |
| RecurrentGPT [Zhou et al., 2024] | Long Output | Decentralized | Memory Gating |
| CoA (Ours) | Long Input | Decentralized | Chain of Agents |

Besides, our contribution is not to claim a new topology for communication. The challenge of long dependency in long input tasks is well known and remains under-explored (as illustrated with the example in Part 2). Our target is to *novelly* mitigate **challenging context dependency issues** in long-context tasks, particularly those requiring complex reasoning. Thus, it is better if we can find a **simpler yet effective approach**. To this end, we explored various structures (WalkMaze, Merge, Debate, Group Discussion, etc.) and found that **a variant of well-established decentralized chain communication, a simple approach, works more effectively** than the others. 
Our contribution is not to claim a new topology for communication but to mitigate a challenging issue (long dependency) with a simple intuitive approach (decentralized chain communication). In addition, CoA is not in conflict with existing work as it can be considered as a plugin for other multi-agent LLM systems (e.g. serve as an additional stage in LongAgent) to improve them. Specifically, previous work for long input tasks, despite showing significant improvements, usually uses disentangling algorithms such as question decomposition, to address long dependency, and then they use centralized structure to answer the disentangled questions one by one. Different from them, CoA does not disentangle the question but leaves the entire question to the agents. Although chain communication is well-introduced in the literature, to the best of our knowledge, CoA is the first work to apply decentralized communication to long-term dependency challenges - please let us know if there is any paper we are missing on this. We show the high effectiveness of CoA in mitigating agent dependency issues for long inputs and we believe it would constitute a significant value add in the multi-agent system literature. We will better clarify these novelty points in the Introduction section of our paper. Below are the detailed answers to specific points: > various communication mechanisms exist in multi-agent systems Indeed, chain communication has been introduced as a very simple topology of communication. Our contribution is not to claim a new topology for communication but to mitigate the challenge of long dependency. We bring a novel perspective to the challenge with the simple chain communication mechanisms. Also, to the best of our knowledge, CoA is the first to use a decentralized structure on long input tasks (see table above). We note that no other multi-agent method in the literature has such results of outperforming RAG for long inputs. 
--- Rebuttal 3: Title: Responses from Authors (Part 2) Comment: > the central agents in centralized communication essentially function as the communication units described in your paper. Although both accomplish the task of communication, we think these two types of communication, centralized communication (e.g., LongAgent) and decentralized communication (our CoA), differ in their approach to solving long context dependency issues. When a centralized approach faces a multi-hop question, it leverages question decomposition to disentangle the long dependency issue, while decentralized communication leverages interleaved reading and processing. Below is an example:
```
Question: Who is the grandson of A?
Source: [1],[2],[3],[4] (chunks)
Evidence in each chunk: [1: A’s husband is D], [2: A’s son is B], [3: No evidence], [4: B’s son is C]
```
```
Centralized communication (e.g., LongAgent):
Round 1:
Manager: Who is the son of A?
Worker with [2]: It is B! Others say unknown
Round 2:
Manager: Who is the son of B?
Worker with [4]: It is C. Others say unknown
Final answer: It is C.
```
In this approach, a worker with [i] and a worker with [i+1] do not communicate. Thus, it is difficult to deal with the dependency when the answer to the question is split between the end of agent $i$'s chunk and the start of agent $i+1$'s chunk.
```
Decentralized communication (Our CoA):
Manager: Who is the grandson of A?
Workers:
[1]: A’s husband is D (topic exploration)
[2]: A’s son is B (answer first hop)
[3]: A’s son is B (forward previous evidence)
[4]: A’s son is B, B’s son is C. Thus, A’s grandson is C. (complete reasoning)
Final answer: It is C
```
This exemplifies how adjacent workers communicating in CoA differ from the centralized approach, as CoA agents receive previous evidence in addition to the question itself. We want to emphasize that LongAgent and our work solve the long context problem in different ways, and they are not in conflict with each other. 
They can be merged into a much stronger framework (e.g., adding our chain reference stage to mitigate context dependency issues before the stages in LongAgent). > Could you clarify how CoA addresses this issue? From my understanding, CoA does not involve the decomposition of the input question. The key aspect of CoA decentralized communication is that workers can communicate so that the question does not need to be decomposed. Our experiments highlight the effectiveness of this design, especially when question decomposition is difficult. For instance, when the passage does not mention “A’s son” directly but mentions “A’s husband is D”, and mentions “D’s son is B”, the first question in the centralized approach should be “Who is A’s husband” rather than “Who is A’s son”, which is difficult to propose. > However, this further diminishes the novelty of the proposed method. We acknowledge that chain communication is not first used in our work. But we are the first ones that use chain communication for long dependency in the **input of long context** tasks and we show its effectiveness on generic tasks. This approach is independent of RecurrentGPT, as RecurrentGPT solves **long output**. For the output, the issue they mitigate is memory and planning for story generation tasks. Combining CoA and RecurrentGPT to solve Long-Input-Long-Output generation could be promising as well and we leave that to future work. > many components of the proposal are common practices in multi-agent system design and long-content generation/processing. We truly appreciate your thoughtful summary. We understand your concerns and would like to re-emphasize the value of our contributions: our work presents a novel and straightforward solution to the long-standing challenge of long context reasoning, and our results demonstrate significant effectiveness. 
We are hopeful that our findings will inspire further innovation in the research community, encouraging others to integrate our simple yet powerful components into their future work on multi-agent systems. > it should be comprehensively tested as a stronger baseline (e.g. LongAgent) across all benchmarks you tested. Thanks for the suggestion. Indeed, LongAgent is an important work in this direction, and we tried to compare against it on all datasets in our paper. However, they did not open-source their code, making it difficult to fully reproduce and compare against them within the short rebuttal window. Since CoA scores about 16 points higher than LongAgent (97.8% vs. 81.53%) on NIAH PLUS (Figure 2 in the PDF), we believe that, given this strong improvement, the performance of CoA will be promising on the remaining benchmarks as well. We will include detailed comparisons across all datasets in the final version of our paper. We hope our response has addressed your concerns and clarified the contributions of our work. We welcome any further questions you may have and would be happy to provide additional clarification if needed. We deeply appreciate the time and effort you've taken to provide feedback on our work. --- Rebuttal Comment 3.1: Comment: Thank you for the clear response, which effectively addresses my concerns regarding how CoA distinguishes itself from existing multi-agent research. Including these discussions in the paper will enhance its quality and emphasize your contributions. I would like to raise my score to 5. --- Reply to Comment 3.1.1: Comment: We are really glad that our responses address your concerns! Thanks so much for your thoughtful questions and insightful discussion. We will carefully include our discussion in the final version. We deeply thank you for the effort you made during the whole process!
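The decentralized chain communication defended in this rebuttal thread can be sketched as follows. `call_llm` is a hypothetical stand-in for any LLM backend (faked here so the control flow runs end to end), and the prompt wording is illustrative, not taken from the paper:

```python
# Minimal sketch of decentralized chain communication (worker chain + manager).
# `call_llm` is a hypothetical placeholder, faked so the control flow runs.

def call_llm(prompt: str) -> str:
    # Placeholder: a real system would query a model here.
    return f"CU({len(prompt)} chars)"

def chain_of_agents(chunks, question):
    cu = ""  # communication unit passed along the worker chain
    for i, chunk in enumerate(chunks):
        prompt = (f"Question: {question}\n"
                  f"Evidence so far: {cu}\n"
                  f"Chunk {i}: {chunk}\n"
                  "Update the evidence.")
        cu = call_llm(prompt)  # worker i reads its chunk plus the previous CU
    # The manager sees only the final CU, never the full source text.
    return call_llm(f"Question: {question}\nEvidence: {cu}\nAnswer:")

answer = chain_of_agents(["A's husband is D.", "A's son is B.", "B's son is C."],
                         "Who is the grandson of A?")
print(answer)
```

The key property matching the multi-hop example above is that worker $i+1$ receives worker $i$'s communication unit, so evidence accumulated early in the chain survives to the end without any question decomposition.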
Summary: The paper "Chain of Agents: Large Language Models Collaborating on Long-Context Tasks" introduces a novel framework called Chain-of-Agents (CoA) to address the challenge of effectively processing long contexts in large language models (LLMs). The CoA framework leverages multi-agent collaboration, where multiple worker agents sequentially handle different segments of the text and a manager agent synthesizes these contributions into a coherent final output. This method aims to mitigate the limitations of input reduction and context window extension strategies by enabling effective information aggregation and context reasoning across various LLMs. The framework is evaluated on a range of long-context tasks, including question answering, summarization, and code completion, demonstrating significant improvements over existing methods. Strengths: - **Task Agnostic and Training Free**: CoA is a versatile framework that does not require task-specific training or fine-tuning, making it applicable to various long-context tasks without additional training overhead. - **Significant Performance Improvement**: The framework shows significant improvements, up to 10%, over strong baselines like Retrieval-Augmented Generation (RAG) and full-context models in multiple long-context tasks, including question answering, summarization, and code completion. - **Mitigation of Long-Context Issues**: CoA effectively addresses common issues associated with long contexts, such as the "lost in the middle" phenomenon, by assigning each agent a shorter context, thus maintaining focus on relevant information. Weaknesses: - **Complexity in Implementation**: Implementing a multi-agent system could be more complex and resource-intensive compared to single-agent systems. - **Communication Overhead**: The sequential communication between agents might introduce latency and inefficiencies. 
- **Evaluation Scope**: While the paper evaluates on multiple datasets, more diverse real-world applications, such as the NIAH test, could further validate the robustness of CoA. Technical Quality: 2 Clarity: 2 Questions for Authors: - Could you test the CoA framework on smaller long-context models, such as qwen2-7b and llama3-8b, which both support an 8k context? I am curious to see how these smart, smaller models perform with CoA. - Regarding token cost, the input tokens are slightly higher than those in baseline methods. What about the output tokens? How are they related to the number of workers ($l$) and to how each worker agent summarizes each chunk? - In Section 5.2, how do you handle the Full-200k method, which is the Vanilla (200k), when processing contexts over 200k length? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable feedback. We appreciate your time and effort spent on this paper. With your insightful suggestions, our paper can improve significantly. **Complexity in Implementation** Indeed, one of the design principles behind CoA is to propose a simple yet effective multi-agent system with multiple agents collaborating towards solving a complex task, unlike many other works on multi-agent system design. As can be inferred from the pseudocode in Algorithm 1 in the paper, the proposed approach is very straightforward to implement, with only O(100) lines of code for the end-to-end system. Also, due to its training-free nature, no training code is necessary, and the end-to-end system can be operationalized effectively in a highly controllable way. We will also release our code upon acceptance to further help the community implement the approach. Besides, we compare the time complexity in GR 3, further demonstrating the practicality of deployment, even in scenarios with tight latency budgets. **Communication Overhead** We compare the time cost of full-context input and Chain-of-Agents theoretically in a decoder-only setting. We assume the response generated by the LLM contains r tokens on average, the input has n tokens, and the context limit of the LLM is k. The time complexity is shown in Table 2 (Appendix A). As can be seen, the encoding time of CoA is less than Full Context because $k \ll n$ in long context tasks, while they have the same decoding time. This demonstrates the efficiency of CoA compared with the Full-Context baseline. We have also conducted an experiment on communication cost and latency. We ran an analysis to show the practical time consumption of the proposed CoA on the HotpotQA dataset. We chose LLaMA-3-8b as the backbone model to prevent additional latency from the network overhead of API queries. 
As can be seen in the table, Vanilla consumes the fewest input tokens and generates the fewest tokens as well. However, it truncates the input and thus maintains low performance. Although RAG generates fewer tokens than CoA (adding the RAG and downstream input together), its retrieval model needs to read the whole input, which is also time-consuming. Overall, RAG is faster than CoA by only ~30% in this example.

| | Running Time (s) | Avg. # of Input | Avg. # of Output | Avg. # Agent Output |
|--------------|------------------|-----------------|------------------|---------------------|
| Vanilla (8k) | 1.33 | 5,912.85 | 2.40 | 2.40 |
| RAG (8k) | 2.41 | 16,479.91 | 2.75 | 2.75 |
| CoA (8k) | 3.10 | 10,840.95 | 38.38 | 11.30 |

**Parallel decoding analysis**: For decoder-only LLMs, the agents can run in parallel. Before CUs are produced, agents can start to read their assigned paragraphs and wait for the CUs to come. Unfortunately, current APIs or models do not support such dynamic reading. Thus, we conduct an approximation by asking the model to generate one token for each sample. The output time is shown to be short and negligible, mimicking the encoding time of each sample. We found that the running time of CoA can be reduced by 57.21% on average, leading to a 1.32-second running time for each sample, close to the running time of the Vanilla baseline! We will integrate discussions of this speedup approach in the final version of the paper. **Evaluation Scope**: We have further conducted evaluations on the NIAH test. We follow the LongAgent [1] paper in running a NIAH PLUS test to evaluate the long context understanding capability of LLMs. Different from the original NeedleInAHaystack test, NIAH PLUS is more challenging because LLMs need to answer questions rather than simply retrieve the information (needle). 
The results are shown as follows:

| NIAH Test | Accuracy |
|-----------------|----------|
| GPT-4 (128k) | 62.00 |
| LongAgent | 81.53 |
| Text-bison (8k) | 26.00 |
| CoA (8k) | **97.80** |

As can be seen, CoA greatly increases the accuracy of Text-bison from 26.0% to 97.8%, showing that CoA significantly increases the capability of LLMs to understand long contexts. We also append the figures of the NIAH test results in the attached PDF. [1] Zhao J, Zu C, Xu H, et al. LongAgent: Scaling Language Models to 128k Context through Multi-Agent Collaboration. arXiv preprint arXiv:2402.11550, 2024. Q1 **Performance on smaller long-context models**: We have added experiments with Llama-3-8b on the NarrativeQA dataset, and the results are listed in the table below. As can be seen, the CoA framework on such LLMs also boosts the Vanilla performance by 10.27 points and surpasses the RAG score by 5.9 points! Note that there is more room to improve, since the prompt for Llama-3 was not even adjusted for this experiment.

| Model | F1 |
|--------------|-----------|
| Vanilla (8k) | 8.78 |
| RAG (8k) | 13.15 |
| CoA (8k) | **19.05** |

Q2 **Output token cost**: Regarding output tokens, we compute the average number of tokens generated by the model in General Response 3. As shown in the table, CoA outputs more tokens than the Vanilla and RAG baselines because it needs to produce CUs and final results. We found an almost linear correlation between the number of generated tokens and the number of workers, showing that each worker generates roughly the same number of tokens; including one more agent increases the total by around 11.3 generated tokens. Q3 **Full-200k chunking**: It is the same as Vanilla (8k). Please refer to the Algorithm in the PDF. We first split the source into sentences, then add sentences to the input one by one, stopping at the largest $i$ such that the first $i$ sentences fit within the context window limit while the first $i+1$ sentences would exceed it. 
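The sentence-packing rule described in the Q3 answer above can be sketched as follows. The naive period-based sentence splitter and the character-count length proxy are simplifications for illustration, not the authors' implementation:

```python
# Illustrative version of the chunking rule from Q3: split the source into
# sentences, then greedily add sentences to the current chunk while it stays
# within the context limit; a new chunk starts when the next sentence would
# overflow. Oversized single sentences are not specially handled here.

def chunk_by_sentences(source: str, limit: int):
    sentences = [s.strip() + "." for s in source.split(".") if s.strip()]
    chunks, current = [], ""
    for sent in sentences:
        if current and len(current) + 1 + len(sent) > limit:
            chunks.append(current)  # current chunk is full; start a new one
            current = sent
        else:
            current = (current + " " + sent).strip()
    if current:
        chunks.append(current)
    return chunks

chunks = chunk_by_sentences("One. Two two. Three three three. Four.", limit=18)
print(chunks)  # -> ['One. Two two.', 'Three three three.', 'Four.']
```

Because chunk boundaries always fall between sentences, no sentence is split mid-way, which is the property the rebuttal's Q1 answer relies on.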
--- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the response. My concerns are adequately addressed. Thus I have improved my score to 5. --- Rebuttal 2: Comment: Dear Reviewer, Since we are approaching the end of the discussion period, we are wondering if you have had the chance to review our response to your feedback. We would like to kindly inquire about the extent to which we have successfully addressed the concerns outlined in your review. We greatly value your feedback and would appreciate any further questions or comments you might have. Thank you for your time and consideration. Sincerely, All Authors --- Rebuttal 3: Comment: We are glad to know our responses addressed your concerns! We would also like to thank you for your thoughtful questions and insightful discussion, and we will include all our discussion results in the final version.
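The encoding-cost argument from the rebuttal (Table 2, Appendix A) can be illustrated numerically: full-context attention over $n$ tokens costs on the order of $n^2$, while CoA's roughly $n/k$ workers each pay at most $k^2$, for a total of about $nk$. The sizes below are illustrative, not measurements from the paper:

```python
# Back-of-the-envelope attention-cost comparison: full context vs. CoA.
# Full context: one quadratic pass over n tokens, ~n^2.
# CoA: ceil(n/k) workers, each attending within its own k-token chunk,
# ~(n/k) * k^2 = n*k. Numbers are illustrative, not measured.

def full_context_cost(n):
    return n * n                 # one pass of quadratic attention

def coa_cost(n, k):
    workers = -(-n // k)         # ceil(n / k) agents
    return workers * k * k       # each worker attends within its chunk

n, k = 100_000, 8_000            # e.g. a 100k-token input, 8k context window
print(full_context_cost(n) / coa_cost(n, k))  # speedup factor (~12x here)
```

This matches the rebuttal's point that CoA's encoding cost is lower whenever $k \ll n$, while the per-token decoding cost is the same in both settings.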
Summary: The paper proposes Chain-of-Agents, a multi-agent LLM collaboration framework for solving long-context tasks, where multiple worker agents sequentially comprehend and communicate to handle different segmented portions of the text, and a manager agent then synthesizes these contributions into a coherent final output. The paper conducts experiments on 9 long-context tasks in QA, summarization, and code completion, showing a significant performance gain over vanilla long-context LMs and RAG baselines. The paper also provides abundant analysis. Strengths: - The proposed method is intuitive. - The experiments are comprehensive, covering 6 different LLMs as backbones and evaluated across 9 benchmarks. - The results are good. - The paper provides useful analysis for more insights. - The paper is well-written. Weaknesses: - There is no support in the paper for the claim in L35 "inspired by human-like processing of long-context task", weakening the motivation. - Results on BookSum are missing in the main table, and no RAG baselines are provided on this benchmark. - Though the overall results are promising, I'm curious whether there is some information loss during the sequential "read and consume", and how this may affect the performance and the design choice. - For the case study in Figure 5, for worker 1, there seems to be more than one line of clues to answer the query (a.k.a. other space missions such as "Voyager"), which may result in a complicated reasoning graph; how is this phenomenon handled by the proposed method? - As also noted in the paper, there is no interactive communication between the agents, only uni-directional and one-time messages. Whether the approach could be considered "communication" is debatable. - Although the time complexity in theory is true for Table 2, how's the actual inference speed on GPU considering the overhead of multiple extra rounds of prompting and decoding?
- It might be helpful to also discuss the related works on communication between language agents, e.g. - Camel: Communicative agents for "mind" exploration of large language model society, NeurIPS 23 - Building Cooperative Embodied Agents Modularly with Large Language Models, ICLR 24 Technical Quality: 3 Clarity: 3 Questions for Authors: Please see Weaknesses Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: As the paper also noted, the communication effectiveness and inference efficiency could be further improved. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
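For readers unfamiliar with the setup, the worker-manager flow summarized in this review can be sketched in a few lines. This is a toy illustration, not the authors' code; the `llm` callable is a hypothetical stand-in for a real model call.

```python
def chain_of_agents(chunks, query, llm):
    """Worker agents read chunks in sequence, each updating a communication
    unit (CU) passed along from the previous worker; a manager agent then
    synthesizes the final answer from the last CU."""
    cu = ""  # communication unit passed along the chain
    for chunk in chunks:
        cu = llm(f"Previous CU: {cu}\nChunk: {chunk}\nQuery: {query}\n"
                 "Update the CU with evidence relevant to the query.")
    # manager step: produce the final answer from the accumulated CU
    return llm(f"CU: {cu}\nQuery: {query}\nAnswer the query using the CU.")
```

With $n$ chunks this makes $n$ worker calls plus one manager call, each individually bounded by the model's context window.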
Rebuttal 1: Rebuttal: Thank you so much for your detailed feedback and for acknowledging the comprehensiveness of our experimental studies. We address the questions and concerns raised below, one by one. W1: **Human motivation**: To clarify this point, the motivation is that humans would not try to read a whole textbook and only then start to reason about it. Instead, it is better for them to learn one section and do exercises on it, then move to the next one while retaining the memory of the previous section, because humans have limited working memory [1], similar to LLMs. This inspires an approach based on interleaved reading, one of the fundamental principles of CoA, rather than putting all the information in one window and then processing it (read-then-process). W2: **BookSum results**: Thanks for the great suggestion. Following it, we have run the BookSum dataset with the text-bison and text-unicorn models. As shown in the table below, similar to the results with the LLMs that support longer contexts, CoA outperforms the baselines by a significant margin. We will add these results and discussions to the paper.

|              | text-bison | text-unicorn |
|--------------|------------|--------------|
| Vanilla (8k) | 9.13       | 8.15         |
| RAG (8k)     | 9.38       | 8.01         |
| CoA (8k)     | **14.51**  | **14.41**    |

W3: **Information loss**: We have added an analysis to further probe the information loss.
For each sample, we compute the highest score between the communication units and the gold answer; if this score is higher than the score of the final prediction, we compute the difference and refer to it as the information loss, as formulated below: $$ \mathrm{Loss} = \max\Big(0,\ \max_{i=0}^{l} \mathrm{score}(CU_i, Y) - \mathrm{score}(\hat{Y}, Y)\Big) $$ The estimated information loss results on various datasets for CoA with text-bison are as follows:

|                  | HotpotQA | MuSiQue | Qasper | NarrativeQA | QMSum |
|------------------|----------|---------|--------|-------------|-------|
| Performance      | 54.87    | 40.38   | 37.03  | 24.04       | 17.01 |
| Information Loss | 1.46     | 1.65    | 3.92   | 3.88        | 0.91  |

This table shows that if we chose the communication unit with the highest score, around 1%-4% performance could be gained, meaning that 1%-4% of the information is lost during the chain communication. While every method may suffer some information loss, CoA's design ensures the final results incur negligible loss. Even when some information is lost, it is still acceptable within the overall framework, as we tested using all CUs as the input to the manager and observed a 5% performance drop (Page 4, footnote). We leave mechanisms to further mitigate information loss to future work on multi-agent communication. W4: **Handling complex reasoning**: We do not give the LLMs a predefined plan. When CoA starts processing, the agents automatically explore the nodes and leave the processed information for the next agent. However, we did observe some patterns: 1) the first worker usually explores the largest number of paths, often covering 3 or more topics, because the agent is not yet sure about the result and wants to provide more information for the next ones; 2) the final worker usually narrows down to one answer with a much shorter CU; and 3) the narrowing down is faster for simpler samples, producing a small reasoning graph.
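The loss formula above is simple to compute once a task metric is fixed. A minimal sketch follows; `score` is a black-box stand-in for the task metric (e.g. F1), and the word-overlap scorer is a toy for illustration only, not the metric used in the rebuttal.

```python
def information_loss(cus, prediction, gold, score):
    """Loss = max(0, best CU score vs. gold - final prediction score):
    how much better the best intermediate communication unit scored
    than the final answer did."""
    best_cu = max(score(cu, gold) for cu in cus)
    return max(0.0, best_cu - score(prediction, gold))

def overlap(a, b):
    """Toy scorer: fraction of gold-answer words recovered."""
    aw, bw = set(a.split()), set(b.split())
    return len(aw & bw) / max(len(bw), 1)
```

If the final prediction scores as well as the best CU, the loss is zero; otherwise it records the gap between the best intermediate CU and the final answer.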
W5: **Agent communication: one-directional or more?** Thanks for pointing this out. We agree that we have agent-to-agent communication in one direction, and this communication is critical for performance. We will clarify the definition in the next version, for example by renaming it a Unidirectional Communication Unit. Regarding the design choice, we used unidirectional rather than bi-directional (or more complex) communication because our experiments show that bi-directional or more complex communication between two agents sometimes introduces more noise and hallucinations, thus hurting performance. W6: **Actual inference time**: We further analyze the actual time cost of running samples. Please refer to GR 3 for more details. W7: **Related work discussion**: Thanks. We will add the suggested papers and compare with them in the next version. [1] Cowan N. Working memory underpins cognitive development, learning, and education. Educational Psychology Review, 2014, 26: 197-223. --- Rebuttal 2: Comment: Dear Reviewer, Since we are approaching the end of the discussion period, we are wondering if you have had the chance to review our response to your feedback. We would like to kindly inquire about the extent to which we have successfully addressed the concerns outlined in your review. We greatly value your feedback and would appreciate any further questions or comments you might have. Thank you for your time and consideration. Sincerely, All Authors
Summary: This paper is about addressing the issue of lengthy inputs when using language models. The predominant approach is RAG, but it is hampered by retrieval performance. Window extension extends the architecture of the model to handle lengthy inputs, but doesn’t guarantee that the model is able to extract the relevant information from them. This work proposes a simple approach where text is split into multiple chunks and the chunks are processed sequentially, left-to-right, where information from all chunks observed so far is consolidated into a summary. The response to a query (if there is one) is calculated based on the aggregate summary of the text. Despite the simplicity of the approach, the method performs well across various models and tasks. Strengths: * Idea is quite simple and seems to be effective * Positive results on many tasks and benchmarks * It is interesting that even long-context models capable of processing lengthy inputs benefit from the proposed method, which reinforces the hypothesis that sifting the relevant information from lengthy inputs directly can be difficult. Consistent improvements are observed on Claude models. * Appreciate the ablations, including the effect of direction (left-to-right, right-to-left) and the lost-in-the-middle experiment. * Seems to work especially well when the inputs are long. Weaknesses: * Technical depth is limited * Framing/presentation is somewhat misleading - From Figure 1, is summarization all that’s being done? Is CoA just a fancy term to describe this sequential summarization process? - ‘Chain of Agents’ makes it sound like there are different agents doing different tasks, which is misleading * Fairly local view of performance results. Only comparisons against RAG/Vanilla baselines are presented. How about comparisons to the best published numbers on these datasets?
* Clarity issues - Figure 1 does not explain the architecture well - 127: W1 generates related evidence useful for answering question - Without knowing CU1, the reader cannot verify this. Basically unable to follow the discussion in 126-134 without knowing what CU1,2,3 are. Technical Quality: 3 Clarity: 3 Questions for Authors: * How sensitive is the model to choice of the size of chunks? * Was the RAG baseline properly optimized? * Table 2 should also compare against RAG models. For some practical applications, RAG could be more beneficial due to inference speed. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations were discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful feedback that has helped us to improve our submission! W1: **Technical depth**. The depth of our work lies in Chain-of-Agents being a multi-agent LLM collaboration framework for solving long-context tasks that is training-free, task/length-agnostic, interpretable, and cost-effective. CoA (1) generalizes to many tasks and (2) achieves considerable improvements, (3) while the framework remains simple yet supports diverse communication patterns. Our experiments show that Chain-of-Agents outperforms the commonly used RAG and long-context LLM baselines by a large margin despite its simple design. W2 (1): **Is summarization all that’s being done?** We would like to clarify that CoA’s applicability is not restricted to the summarization task. While the communicated results summarize the contents of the source, CoA agents play various roles beyond summarization: they perform content selection, information aggregation, context utilization, and reasoning to solve complex problems from diverse tasks, such as extracting and refining information through agent communication and performing analysis and reasoning with it. Tables 11-13 exemplify CoA agents' capabilities to generate evidence for answering the query, a summary of chunks, and code comments and usages. W2 (2): **Naming as Chain of Agents**: We name it an *agent* for two reasons. First, Chain-of-Agents contains two different roles: a manager agent and worker agents. Worker agents analyze, aggregate, and digest segment information, whereas the manager agent synthesizes these contributions into a coherent final output. Second, we use "agents" to emphasize the "communication" between them. We will clarify this. W3: **Comparison with SOTA**: We chose RAG and full-context as baselines because they represent the commonly used and solid approaches of input reduction and window extension, respectively. Thanks for your great advice.
We have further added comparisons of CoA with the previous SOTA below, including even methods that require training (indicated with *). As can be seen, CoA achieves better or comparable results on all datasets, improving HotpotQA and RepoBench-P by a large margin. The performance on some datasets is lower than SOTA because those results come from models trained on domain-specific data, as with the Qasper dataset.

|               | HotpotQA | MuSique  | Qasper    | NarrativeQA | Quality  | QMsum      | GovReport | BookSum   | RepoBench-P |
|---------------|----------|----------|-----------|-------------|----------|------------|-----------|-----------|-------------|
| Previous Best | 54.4 [1] | 40.4 [1] | 53.9* [2] | 26 [1]      | 89.2 [3] | 22.44* [2] | 26.3 [3]  | 18.5* [4] | 56.47 [1]   |
| Ours Best     | 62.04    | 42.49    | 38.01     | 25.26       | 83.8     | 17.67      | 26.98     | 17.47     | 73.05       |

W4: **Clarification of Figure 1**: We have revised Figure 1 in the pdf. The new figure clarifies the workflow and the communication units. It is worth noting that the blue boxes on the left are the CUs themselves. Q1: **Context window sizes**: As shown in Figure 6, CoA's performance becomes stable when the context window is larger than 8k. We have further analyzed different agent context lengths and report the scores of text-bison on the NarrativeQA dataset in the following table. Our results show that CoA improves the baseline across various window sizes. Thus, we choose 8k for short-context models such as text-bison.

| Model\context len | 4k    | 8k    | 16k   | 32k   |
|-------------------|-------|-------|-------|-------|
| text-bison-32k    | 45.69 | 53.55 | 59.14 | 48.54 |
| Ours (same base)  | 54.95 | 60.34 | 63.11 | 50.25 |

Q2: **Is RAG well-optimized?** The specific RAG [5] implementation we use is indeed well-optimized.
It is the SOTA retrieval model on the MTEB leaderboard on Hugging Face, and we follow the recent approaches that combine RAG and LLMs for the best prompting. Moreover, the RAG model is fine-tuned on the HotpotQA dataset, making it especially well adapted, as HotpotQA is one of the datasets evaluated in this paper. Overall, the fact that CoA outperforms such an optimized RAG implementation highlights its effectiveness. Q3: **Time complexity analysis**: We agree that RAG can be more beneficial in terms of inference speed. However, it hurts performance because semantic similarity cannot ensure retrieval of the needed information. It is difficult for RAG models to answer questions that require multiple reasoning hops (such as multi-hop QA) or the entire input (such as counting tokens in a passage). We compare the time complexity of RAG with CoA and will add the table in the final version. Here $k'$ is the chunk size of the RAG model, $k$ is the context window of the downstream LLM, $r$ is the average response length, and $n$ is the input source length. We have also compared the time cost and show that RAG is only around 30% faster than CoA. Moreover, CoA can be further sped up by parallel reading. Please refer to GR 3 for details of the inference speed analysis.

|     | Encode            | Decode         |
|-----|-------------------|----------------|
| RAG | $O(nk') + O(k^2)$ | $O(n) + O(kr)$ |
| CoA | $O(nk)$           | $O(nr)$        |

[1] Jin et al. LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning. ICML 2024. [2] Shaham et al. SCROLLS: Standardized CompaRison Over Long Language Sequences. EMNLP 2022. [3] Shaham et al. ZeroSCROLLS: A Zero-Shot Benchmark for Long Text Understanding. EMNLP Findings 2023. [4] Pu D, Demberg V. RST-LoRA: A Discourse-Aware Low-Rank Adaptation for Long Document Abstractive Summarization. NAACL 2024. [5] Xiao S, Liu Z, Zhang P, et al. C-Pack: Packaged Resources to Advance General Chinese Embedding. SIGIR 2024.
--- Rebuttal 2: Comment: Dear Reviewer, Since we are approaching the end of the discussion period, we are wondering if you have had the chance to review our response to your feedback. We would like to kindly inquire about the extent to which we have successfully addressed the concerns outlined in your review. We greatly value your feedback and would appreciate any further questions or comments you might have. Thank you for your time and consideration. Sincerely, All Authors
Rebuttal 1: Rebuttal: We thank the reviewers for all the valuable feedback and comments that have helped to improve our paper! We also thank the reviewers for appreciating the intuitive and interesting design of CoA (Reviewers vGWD, FiRU, FfHC, KMmx), the effectiveness of CoA (Reviewers vGWD, FiRU, FfHC, KMmx), the comprehensiveness of the evaluation (Reviewers vGWD, FiRU, FfHC), the interesting analysis (Reviewers vGWD, FiRU, FfHC), as well as the great presentation and writing (Reviewers FiRU, KMmx). We address several key questions in the following paragraphs: GR 1 **Novelty and Technical Depth** While chunking the input into multiple segments seems intuitive, the novelty of CoA lies in the chain communication of multiple agents over the input using natural language, which has not been explored by previous studies; such a simple yet effective framework with strong performance can create high impact. LongAgent does not allow communication between agents, making it hard for each agent to understand what is happening in the others, which hurts performance on tasks that require long-range dependencies. Besides, the authors of LongAgent state that decomposition of the input question is also a limitation for LongAgent: failure to decompose multi-hop or complex questions directly leads to wrong answers. Additionally, some of the structures of LongAgent are transferable to CoA, such as conflict resolution. In contrast, CoA concentrates on improving long-dependency reasoning (such as multi-hop question answering) and does not depend on the decomposition capability of the manager. Regarding RecurrentGPT, although it uses a chain structure for long story generation, the task and motivation are different. It focuses on memorizing the context to have a better plan for generating a longer story. In contrast, CoA focuses on long-dependency and complex reasoning over the source text rather than generation.
The depth of our work lies in Chain-of-Agents being a multi-agent LLM collaboration framework for solving long-context tasks that is training-free, task/length-agnostic, interpretable, and cost-effective. While the framework is conceptually straightforward, CoA can (1) generalize to many tasks, (2) achieve considerable improvements, and (3) demonstrate diverse communication patterns. Our experiments show that CoA outperforms commonly used RAG and long-context LLM baselines by a large margin despite its simple design. Analysis shows that by integrating information aggregation and context reasoning, CoA effectively mitigates the lost-in-the-middle phenomenon and performs better on longer samples. GR 2 **Evaluation Scope** We used nine representative, diverse datasets covering three broad categories of long-context tasks, and we believe the baselines used are already strong given the literature in this direction. As described in the individual responses, we will include comparisons with more baselines, including WalkingMaze, LongAgent, RecurrentGPT, and other SOTA models, in our final paper. These additional results further corroborate the consistent and significant performance improvements of CoA. We follow the LongAgent paper in running the NIAH PLUS test to evaluate the long-context understanding capability of LLMs. Different from the original Needle-In-A-Haystack test, NIAH PLUS is more challenging because LLMs need to answer questions rather than simply retrieve the information (the needle). The results are shown as follows:

| NIAH PLUS Test  | Accuracy  |
|-----------------|-----------|
| GPT-4 (128k)    | 62.00     |
| LongAgent       | 81.53     |
| text-bison (8k) | 26.00     |
| CoA (8k)        | **97.80** |

As can be seen, CoA greatly increases the accuracy of text-bison from 26.0% to 97.8%, showing that CoA significantly improves the capability of LLMs to understand long contexts. We also append the figures of NIAH test results in the attached pdf.
GR 3 **Practical Time Complexity** We ran an analysis of the practical time consumption of the proposed CoA on the HotpotQA dataset. We chose LLaMA-3-8b as the backbone model to avoid the additional network latency of API queries. As can be seen in the table, Vanilla consumes the fewest input tokens and generates the fewest tokens as well; however, it truncates the input and thus yields low performance. Although RAG generates fewer tokens than CoA (adding the RAG and downstream input together), it needs to read the whole input with retrieval models, which is also time-consuming. Overall, RAG is only ~30% faster than CoA in this example.

|              | Running Time (s) | Avg. # of Input | Avg. # of Output | Avg. # Agent Output |
|--------------|------------------|-----------------|------------------|---------------------|
| Vanilla (8k) | 1.33             | 5,912.85        | 2.40             | 2.40                |
| RAG (8k)     | 2.41             | 16,479.91       | 2.75             | 2.75                |
| CoA (8k)     | 3.10             | 10,840.95       | 38.38            | 11.30               |

**Parallel decoding analysis**: For decoder-only LLMs, CoA agents can run in parallel: before the CUs are produced, agents can start to read their assigned paragraphs and wait for the CUs to come. Unfortunately, current APIs and models do not support such dynamic reading. Thus, we conduct an approximation by asking the model to generate one token for each sample; the output time is short and negligible, mimicking the encoding time of each sample. We find that the running time of CoA can be reduced by **57.21%** on average, leading to a 1.32-second running time for each sample, close to the running time of the Vanilla baseline! We will integrate discussions on this speedup approach in the final version of the paper. Pdf: /pdf/e14bf20b09a4d12814d4c2c093cf3804f11505b6.pdf
NeurIPS_2024_submissions_huggingface
2024
Two-way Deconfounder for Off-policy Evaluation in Causal Reinforcement Learning
Accept (poster)
Summary: This paper considers a setting in which there are observed tuples $(O_{i,t}, A_{i,t}, R_{i,t})$ with $1\leq i \leq N$, $1 \leq t \leq T$; here, $O_{i,t}$ represents observable quantities, $A_{i,t}$ is an action generated by a behavioural policy, and $R_{i,t}$ is the associated reward. The goal of this paper is off-policy evaluation; specifically, the authors aim to estimate the expected cumulative reward under a given policy that differs from the behavioural policy. The authors propose a generative model of the observed trajectories. The specific contribution of this work is to consider a two-way confounder, which as I understand it is a tuple of latent variables $(U_i, W_t)$ representing subject- and time-varying components, respectively. Strengths: The proposed two-way confounding model can be considered original. The authors consider separating a confounder into time- and subject-invariant parts and model their interaction by a neural network. This may be considered a low-rank approximation (analogous to nonlinear matrix factorisation), and it reduces the number of latent variables from $NT$ to $N + T$. The empirical evaluation shows that the proposed method performs well. The authors also experimentally demonstrate the sensitivity to the two-way confounder assumption. Weaknesses: # Clarity ## Major issues: 1. Line 104: It is unclear what $\mathbb{E}^\pi$ denotes. If the expectation is simply taken with respect to the dynamics generated by $\pi$, doesn't this have an effect from the confounders? 2. Introducing the estimator (1) before the model would help the reader to understand the quantities to be estimated and hence the model. 3. The assumption on the data generating process is not straightforward to understand. In the unconstrained unmeasured confounding (UUC) setting, the trajectories cannot be independent, and the data assumption (Line 89) alone already excludes the UUC setting.
At the same time, for the proposed TWUC setting, the presence of the subject-invariant latent $W_t$ renders the trajectories dependent on each other. Using a graphical model would help clarify the assumption. ## Minor issues: 1. The $\cup$ notation is undefined. 2. Line 187: Gaussian Mixture Model. The transition kernel does not look like a mixture. Why the term? 3. L 287: forr # Theoretical justification While I realise that the focus of the paper is on the empirical performance, the supporting theory is relatively weak. 1. The validity of the estimator (1) is claimed, but there is no reference to this claim. Moreover, the expectation needs to be estimated, and calling (1) an estimator sounds odd to me. 2. I have not been able to read through the proof, but the first paragraph already casts some doubt. Why does the estimand $\eta$ depend on the latent variables, and why are they not marginalised? (this might explain what I don't understand about the assumption). It seems that the only variability comes from the initial observation rather than the whole trajectories (since the confounders are fixed). The consistency result therefore has limited reliability. Technical Quality: 3 Clarity: 2 Questions for Authors: NA Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - __Clarification of the DGP.__\ We agree with your points and will replace "independent trajectories" with "trajectories". "Independent" was used to indicate conditional independence between trajectories given the latent two-way fixed effects, but this was confusing. Following your suggestion, we use different graphical models to clarify the DGP under different assumptions in Figure 1 of the PDF file. We will include this figure if our paper is accepted. - __$\eta^\pi$'s dependence on latent variables and proof sketch.__\ To validate our theory and save your time, we provide an outline of our proof below. We first clarify $\eta^\pi$'s dependence on latent variables: 1. We assume the two-way unmeasured confounders are fixed, aligning with the existing literature on two-way fixed effects regression. 2. While the latent factors $U_i$s and $W_t$s are considered fixed, not all observations $O_{i,t}$s are fixed. Thus, **the variability of $R_{i,t}$ comes from the initial observation and all subsequent intermediate observations along the trajectory**, generated randomly by the transition function. 3. The estimation error of $\eta^\pi$ arises from variability in the initial observation, the estimated transition dynamics, the latent factors, and the distributional shift between the behavior and target policies, as highlighted in the first two terms of our error bound in Theorem 1. Meanwhile, our theory can also accommodate random two-way unmeasured confounders by bounding the deviation between $(NT)^{-1}\sum_{i,t} \mathbb{E}^\pi[R_{i,t}|U_i,\\{W_{t'}\\}\_{t'=1}^t]$ and $T^{-1}\sum_t \mathbb{E}^\pi[R_{1,t}]$. This deviation can be decomposed into two terms: $$\Big[\frac{1}{NT}\sum_{i,t} \mathbb{E}^\pi[R_{i,t}|U_i,\\{W_{t'}\\}\_{t'=1}^t]-\frac{1}{T}\sum_t \mathbb{E}^\pi[R_{1,t}|\\{W_{t'}\\}\_{t'=1}^t]\Big]+\Big[\frac{1}{T}\sum_t\mathbb{E}^\pi[R_{1,t}|\\{W_{t'}\\}\_{t'=1}^t]-\frac{1}{T}\sum_t\mathbb{E}^\pi[R_{1,t}]\Big].$$ Assuming $U_i$s are i.i.d.
and independent from $W_t$s, we apply Hoeffding's inequality to the first term, resulting in an order of $R_{max}\sqrt{2N^{-1}\log (2/\delta)}$ with probability at least $1-\delta$. The second term requires assuming that the time series exhibits certain mixing properties (e.g., $\beta$-mixing), allowing us to treat each $R_{1,t}$ as dependent only on recent $W_{t'}$s. Using Berbee's coupling lemma (Berbee 1987) and approximating this term with sums of i.i.d. random variables, we apply Hoeffding's inequality again to establish the tail inequality for the second term. We are happy to revise our theory and proof to incorporate these changes if our paper is accepted. Finally, to facilitate understanding of Theorem 1, we provide a _proof sketch_. We decompose $|\hat{\eta}^\pi - \eta^\pi|$ into two terms, $I_1$ and $I_2$. $I_2$ represents the absolute difference between the mean of the value function and its expectation, given that the latent confounders and transition function take their true values. The only difference between the two terms is the expectation over the initial state. Based on the boundedness of the value function (Assumption 2) and the conditional independence of the trajectories given all latent confounders, we can apply Hoeffding's inequality. This gives $I_2$ an order of $R_{max}\sqrt{2N^{-1}\log (2/\delta)}$ with probability at least $1-\delta$. $I_1$ represents the absolute difference between the estimator $\hat{\eta}^{\pi}$ (the mean of the estimated value function) and the mean of the true value function. Using the Bellman equation, we transform the difference into a function of the differences between the estimated and true transition functions, i.e., a function of $\operatorname{TV}(\hat{\mathcal{P}}(\bullet|a_t,o_t,\hat{u}\_i,\hat{w}\_t)-\mathcal{P}(\bullet|a_t,o_t,u_i,w_t))$.
Furthermore, the function of $\operatorname{TV}(\hat{\mathcal{P}}(\bullet|a_t,o_t,\hat{u}\_i,\hat{w}\_t)-\mathcal{P}(\bullet|a_t,o_t,u_i,w_t))$ can be decomposed into the sum of the functions of $\operatorname{TV}(\hat{\mathcal{P}}(\bullet|a_t,o_t,\hat{u}\_i,\hat{w}\_t)-\hat{\mathcal{P}}(\bullet|a_t,o_t,u_i,w_t))$ (denoted as $I_3$) and $\operatorname{TV}(\hat{\mathcal{P}}(\bullet|a_t,o_t,u_i,w_t)-\mathcal{P}(\bullet|a_t,o_t,u_i,w_t))$ (denoted as $I_4$). Based on Assumption 4 and the Lipschitz continuity of the neural network, we can determine that the order of $I_3$ is $\varepsilon_{U,W,\delta}$ with probability at least $1-\delta$. Finally, by leveraging Assumptions 1 and 3, we find that the order of $I_4$ is $\varepsilon_{\mathcal{P},\delta}$ with probability at least $1-\delta$. Combining this with all previously derived conclusions, we can then establish the order of $|\hat{\eta}^\pi-\eta^\pi|$ with probability at least $1-3\delta$. - __Equation (1).__ - __Is (1) an estimator?__: As you commented, (1) is not our final estimator, since the expectation needs to be approximated based on the Monte Carlo method detailed on Page 5, Lines 205 to 210. It serves as an **intermediate estimator** that is unbiased to the evaluation target (see our justification of the unbiasedness in the next response). We will make this clear to avoid potential confusion. - __Validity of (1)__: We claimed Equation (1) is unbiased to $\eta^\pi$ in Line 200. Demonstrating this unbiasedness is straightforward: Recall that $\eta^\pi$ is the average expected reward across $N$ individuals over $T$ time points. Applying the law of total expectation, it can be expressed as $(NT)^{-1}\sum_{i,t}\mathbb{E}^\pi[R_{i,t}|O_{i,1},U_i,\\{W_{t'}\\}\_{t'=1}^t]$, which equals Equation (1). - __Introducing (1) earlier__: Following your suggestion, we will introduce (1) before the model if our paper is accepted. - __Minor issues.__\ We apologize for the typos. 
In line 126, we will revise $Z_{i,t}=U_i\cup W_t$ to $Z_{i,t}=(U_i^\top,W_t^\top)^\top$ to avoid using the undefined notation $\cup$. Additionally, we will replace 'Gaussian Mixture Model' with 'Gaussian Model' in line 187. --- Rebuttal Comment 1.1: Comment: I thank the authors for the clarifications. As a follow-up comment, I think it is confusing to analyse the case where the latent variables are fixed given that the objective does not contain any latent variables (as in the general response), and this should be corrected (as proposed). The theoretical analysis currently does not seem to be well linked to the proposed training method, and it remains unclear when the four assumptions can be achieved. For this reason, I still find it difficult to provide strong support for acceptance, while I appreciate the empirical nature (and performance) of the present work. --- Reply to Comment 1.1.1: Comment: We sincerely thank the referee for the positive feedback on the empirical nature and performance of our paper. Below, we address the remaining comments on our theoretical analysis. - __Latent vs. fixed variables.__ We will revise this as suggested. As outlined in the rebuttal, the error bound remains similar to Theorem 1, with an additional term of $c_2 R_\max (N^{-1/2}+T^{-1/2})$, up to some logarithmic factors, to account for the randomness of the latent factors. We hope this clarification resolves your concerns. - __Clarification on the assumptions.__ This issue was not raised in the official review, but we believe Assumptions 1–4 are mild and achievable. We hope our response addresses your concerns. * Assumptions 1 and 2 are standard in the reinforcement learning literature (e.g., [1], [2], [3], [4], [11], [12], [13], [14]). Though space limits citations, these assumptions are widely applied in offline policy optimization and off-policy evaluation. * Assumption 3 is concerned with the estimation error of the transition function.
This assumption is flexible, as $\varepsilon_{\mathcal{P},\delta}$ can be adjusted to a larger value to meet the condition. In our implementation, we use a conditional Gaussian model for the transition function, following [6]. The total variation bound, measured by $\varepsilon_{\mathcal{P},\delta}$, thus reduces to the estimation errors of the mean and covariance functions. Using neural networks for function approximation allows the estimator to achieve an optimal non-parametric convergence rate (e.g., [7], [8], [9]). This yields the specific form of $\varepsilon_{\mathcal{P},\delta}$. Even without the conditional Gaussian model, deep generative learning algorithms with theoretical guarantees can be employed, with error bounds provided in studies like [10]. * Assumption 4 is concerned with the estimation error of the latent confounders. Like Assumption 3, it is flexible, as $\varepsilon_{U,W,\delta}$ can be adjusted to a larger value. Additionally, under a two-way additive model assumption, the factors $\\{U\_i\\}\_i$ and $\\{W\_t\\}\_t$ can be estimated at orders of $\sqrt{T^{-1/2}\log (N/\delta)}$ and $\sqrt{N^{-1/2}\log (T/\delta)}$ with probabilities at least $1-O(\delta)$ (see, [15]). This provides the detailed forms of $\varepsilon_{U,W,\delta}$. [1] Chen J, Jiang N. Information-theoretic considerations in batch reinforcement learning[C]//International Conference on Machine Learning. PMLR, 2019: 1042-1051. \ [2] Fan J, Wang Z, Xie Y, et al. A theoretical analysis of deep Q-learning[C]//Learning for dynamics and control. PMLR, 2020: 486-489.\ [3] Liu Y, Swaminathan A, Agarwal A, et al. Provably good batch off-policy reinforcement learning without great exploration[J]. Advances in neural information processing systems, 2020, 33: 1264-1274. \ [4] Uehara M, Sun W. Pessimistic model-based offline reinforcement learning under partial coverage[J]. arXiv preprint arXiv:2107.06226, 2021. \ [5] Hornik K, Stinchcombe M, White H. 
Multilayer feedforward networks are universal approximators[J]. Neural networks, 1989, 2(5): 359-366.\ [6] Yu, T., Thomas, G., Yu, L., Ermon, S., Zou, J.Y., Levine, S., Finn, C. and Ma, T., 2020. Mopo: Model-based offline policy optimization. Advances in Neural Information Processing Systems, 33, pp.14129-14142.\ [7] Schmidt-Hieber, J. (2020). Nonparametric regression using deep neural networks with ReLU activation function. The Annals of Statistics, 48(4).\ [8] Farrell, M. H., Liang, T., & Misra, S. (2021). Deep neural networks for estimation and inference. Econometrica, 89(1), 181-213.\ [9] Imaizumi, M., & Fukumizu, K. (2019, April). Deep neural networks learn non-smooth functions effectively. In The 22nd international conference on artificial intelligence and statistics (pp. 869-878). PMLR.\ [10] Zhou, Y., Shi, C., Li, L., & Yao, Q. (2023). Testing for the Markov property in time series via deep conditional generative learning. Journal of the Royal Statistical Society Series B: Statistical Methodology, 85(4), 1204-1222.\ [11] Rashidinejad P, Zhu B, Ma C, et al. Bridging offline reinforcement learning and imitation learning: A tale of pessimism[J]. Advances in Neural Information Processing Systems, 2021, 34: 11702-11716.\ [12] Yin M, Wang Y X. Towards instance-optimal offline reinforcement learning with pessimism[J]. Advances in neural information processing systems, 2021, 34: 4065-4078.\ [13] Cui Q, Du S S. Provably efficient offline multi-agent reinforcement learning via strategy-wise bonus[J]. Advances in Neural Information Processing Systems, 2022, 35: 11739-11751.\ [14] Wang X, Cui Q, Du S S. On gap-dependent bounds for offline reinforcement learning[J]. Advances in Neural Information Processing Systems, 2022, 35: 14865-14877.\ [15] Bian, Z., Shi, C., Qi, Z., & Wang, L. (2023). Off-policy evaluation in doubly inhomogeneous environments. arXiv preprint arXiv:2306.08719.
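For concreteness, here is a minimal sketch of the conditional Gaussian transition model discussed above, with linear maps standing in for the mean and log-variance networks; all names and dimensions below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Hypothetical linear stand-ins for the mean and log-variance functions; in the
# implementation described above, these would be neural networks fitted to the
# offline data, following the conditional Gaussian model of [6].
rng = np.random.default_rng(0)
d_in, d_out = 5, 3  # dim of the (a, o, u, w) input; dim of the next observation
W_mu = 0.1 * rng.normal(size=(d_out, d_in))
W_logvar = 0.1 * rng.normal(size=(d_out, d_in))

def sample_next_obs(a, o, u, w, rng):
    """Draw o' ~ N(mu(a, o, u, w), diag(sigma^2(a, o, u, w)))."""
    x = np.concatenate([a, o, u, w])
    mu = W_mu @ x
    sigma = np.exp(0.5 * (W_logvar @ x))  # positive std via a log-variance head
    return mu + sigma * rng.normal(size=d_out)
```

Under this parameterization, bounding the total variation distance in Assumption 3 reduces to bounding the estimation errors of the mean and covariance functions, as noted above.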
Summary: This work proposes a novel technique for performing off-policy evaluation (OPE) in the presence of unobserved confounders that have been classified as "two-way" unmeasured confounders, viz., by assuming that there exist both time-invariant and trajectory-invariant confounders, but not time-and-trajectory-invariant confounders. Roughly outlined, their approach is to first use a network architecture that takes embeddings of these different confounder types, uses a neural tensor network to obtain estimates of transitions from a transition network and actions from an actor network, and then performs OPE in standard Monte Carlo simulation by estimating the expected cumulative reward from many simulated trajectories. Experiments compare their TWD technique's LMSE and LMAE against a host of traditional and modern techniques that span numerous means of dealing with confounding from traditional model-based methods to POMDPs to TWDIDPs. Across several simulation studies using principled data generation (like the PK-PD model for tumor growth generation), the TWD technique is shown to be the most robust, obtaining lower LMSE and LMAE than the other methods compared. Ablation studies following these show how all of the TWD components are needed to obtain maximal accuracy. Strengths: - The review of related work is particularly rich; the authors do a tremendous job at finding recent and relevant work related to the problem space and the various viewpoints in approaching deconfounding. - The writing is exceptionally clear, with well-stated assumptions, defined variables, and illustrations of process in each figure. - Classifying the types of transience to which UCs belong is interesting and exploited in a nice way within the TWD technique. - I'm a fan of the idea of performing MC simulation using the learned transition network to accomplish the OPE. I found the elaboration on line [205] to be particularly compelling. 
- Compared strategies in Section 5 are comprehensive and recent. Weaknesses: - [197] One minor point that likely deserves some consideration in general: the causal structure of the system, viz., that certain contexts O should *not* be included in evidence while estimating \eta^\pi -- for example, considering the classic confounding M structure (see Cinelli, Forney, & Pearl, 2022), there are cases where an observed context must be omitted from conditioned evidence to obtain an unbiased estimate. - [5.1] While I completely agree with the risks of ignoring UCs and that the NUC assumption is usually made for convenience rather than realism, would it not also be fair to run an experiment in an environment *without* UCs to see if there is some tax paid by this paper's technique (which seems to *assume* their presence)? It's possible that looking for confounding where none exists comes with a price that the experiments herein should advertise if present. Still, I think the authors did a good job of putting their technique through its paces within the contexts of Section 5, especially the ablation study. Technical Quality: 3 Clarity: 4 Questions for Authors: ** Trivial Points ** - Although it has its own section explaining it, two-way unmeasured confounding appears in the introduction without even a short explanation of its meaning; it would aid the clarity if it had a brief, intuitive lead before it gets referenced in the text. - [39] "However, these works require restrict mathematical assumptions that are hard to verify..." -- was "restrict" meant to be "strict" or "restricting"? - [117] "...which pluggs in the estimated latent factors..." -- typo? - [769] "A illustrative example..." -- typo - [Eqn. 1] Why is the subscript on O_{i,1} rather than O_{i,t}? - [236] "Assumptions 4 is concerned..." 
-- typo ** Substantive Questions ** - [102] "Our objective lies in evaluating the expected cumulative reward under a given target policy π, which depends solely on the observation and not on the latent confounders." The phrasing could be a bit improved here -- one read is that the expected cumulative reward depends solely on the observation and not the latent confounders, but your transition model in the previous paragraph makes it clear that this is not the case. You mean to say that the evaluation itself must be only a function of the observations since we do not have direct access to the UCs? - [Figure 1a] What's the significance of the edges emanating from Z_{i,t}? Just to show that OWUC vs TWUC are different ways of modeling the more general unconstrained UC space? - One part that you might want to develop a bit more: the intended application of these techniques -- is this primarily suited for artificially generated policies or those employed by human decision-makers like given in the examples? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - __Omitting observations for avoiding biases.__\ This is an excellent comment! We have also carefully read [1] that you mentioned and will include it as well as our discussions in our paper, should it be accepted. This paper discusses whether an additional random variable $Z$ should be included in the regression equation for estimating the average causal effect (ACE) of a treatment $X$ on an outcome $Y$ in causal inference. In our current problem, $\eta^{\pi}$ represents the expected cumulative reward under a target policy $\pi$. To draw a parallel with the causal inference problem, our action $A_{i,t}$ corresponds to the treatment $X$, the expected reward $R_{i,t}$ and the next observation $O_{i,t+1}$ correspond to the outcome $Y$, and the initial observation $O_{i,1}$ corresponds to $Z$. In this case, the causal relationship among $O_{i,1}$, $A_{i,t}$, and $(R_{i,t}, O_{i,t+1})$ aligns with "Model 1" in [1], where $A_{i,t} \rightarrow (R_{i,t}, O_{i,t+1})$ and $A_{i,t} \leftarrow O_{i,1} \rightarrow R_{i,t}$. Under these circumstances, controlling for $O_{i,1}$ or including it as an independent variable in the model is beneficial because it blocks the back-door paths. Therefore, when estimating $\eta^{\pi}$, there is no need to omit the initial observation $O_{i,1}$. - __Settings without unmeasured confounders.__\ This is another excellent comment. We fully agree on the importance of investigating the tax of the proposed two-way deconfounder in settings without unmeasured confounders. Indeed, our sensitivity analysis reported in Section 5.3 compares different algorithms with different degrees of unmeasured confounding, characterized by a sensitivity parameter $\Gamma$. A large $\Gamma$ indicates a stronger effect of unmeasured confounding. When $\Gamma$ decays to zero, no unmeasured confounders exist. 
As shown in Figure 5: - When $\Gamma=0$ where there are no unmeasured confounders, the proposed two-way deconfounder (TWD) loses its superiority when compared to standard algorithms that ignore unmeasured confounders, such as MB and TWDIDP2. This demonstrates the price of looking for confounding where none exists. - When $\Gamma=0.3$ where the effects of unmeasured confounding are weak, TWD achieves similar performance to MB and TWDIDP2. - Finally, when $\Gamma>0.3$, the effects of unmeasured confounding become (moderately) strong. As a result, TWD achieves the lowest LMSE and LMAE. Inspired by Reviewer 7kZ2, we have outlined a hypothesis testing procedure to test the degree of unmeasured confounding, in order to determine the best algorithm to use. This hybrid procedure is expected to enhance the robustness of the TWD performance across diverse settings. - __Minor comments.__ - We apologize for the typos and will correct them. "restrict" was meant to be "restrictive". - Should our paper be accepted, we will add a short explanation of the proposed two-way unmeasured confounding in the introduction. - In Equation (1), the conditioning set contains variables that are not affected by policies, such as the initial observation. Inclusion of $O_{i,t}$ is problematic because these observations are influenced by the behavior policy, while Equation (1) is intended to compute the expectation under the target policy. - __Clarification of the dependence on the latent confounders.__\ You are absolutely correct. The immediate reward indeed depends on the latent confounders, but the target policy we wish to evaluate does not depend on these confounders. We plan to adopt the following changes to make this point clearer: - We will rephrase this sentence to emphasize that it is the target policy rather than the immediate reward that does not depend on latent confounders. 
- Motivated by the comments from Reviewer P4Ev, we will include visualizations of the data generating process of the offline data and that under the target policy, to further illustrate this point; see Figure 2 in the PDF file. - __Clarification on Figure 1(a).__\ Thank you for the constructive comment! The edges indicate that both OWUC and TWUC are special cases of UUC. Specifically, the edges from $\\{Z_{i,1}\\}\_{i}$ to $\\{H_{i}\\}\_{i}$ indicate that the one-way unmeasured confounders can be viewed as special cases of unconstrained unmeasured confounders that remain the same over time. Similarly, the edges from $\\{Z_{i,t}\\}\_{i,t}$ to $\\{U_{i}\\}\_{i}$ and $\\{W_{t}\\}\_{t}$ indicate that the two-way unmeasured confounders can be viewed as special cases of unconstrained unmeasured confounders that do not vary either over time or across trajectories. - __The intended application.__\ Thank you for your suggestion. Should our paper be accepted, we will further elaborate on the intended application of our method, which is primarily designed for settings involving human decision-makers. In these contexts, decision-makers often rely on critical but unrecorded information when taking actions, leading to confounded datasets. For example, in healthcare or contexts similar to our case study, doctors frequently use visual observations or patient interactions to inform treatment decisions. Such unstructured data can be challenging to quantify and are often omitted [2]. Similarly, in technology companies where behavior policies involve human interventions, the data can also become confounded [3]. - __References.__\ [1] Cinelli C, Forney A, Pearl J. A crash course in good and bad controls[J]. Sociological Methods & Research, 2022: 00491241221099552.\ [2] McDonald, C. J. (1996). Medical heuristics: the silent adjudicators of clinical practice.\ [3] Shi, C., Zhu, J., Shen, Y., Luo, S., Zhu, H., & Song, R. (2024). 
Off-policy confidence interval estimation with confounded Markov decision process. Journal of the American Statistical Association, 119(545), 273-284. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed reply, to which I have only one response re: Omitting observations for avoiding biases. I appreciate you looking into the reference I provided and even citing the example of the confounding scenario with treatment $X$, confounders $Z$, and outcome $Y$. While I agree with your assessment that the observations in your work, $O_{i,1}$ are _pretreatment_ covariates (i.e., that are observed before the chosen action), I think it might be too broad to state that all such observations would match the confounding structure wherein controlling for $Z$ blocks the back-door -- it is possible, as I mentioned about the M-graph example (Model 7 in Cinelli, Forney, & Pearl, 2022), to have pretreatment covariates that are correlated with both treatment and outcome but that one should _not_ control for, even if they are observed before the treatment choice. I also agree with your ideas for future study / amendments and think that they will strengthen this work, but am maintaining my current score (which I think is quite strong) because I cannot assess these future implementations yet, think the impact of this paper is appropriately scored, and agree with some of the theoretical qualms that some of the other reviewers have raised. Still, I think this paper introduces fresh ideas that the community would find interesting and found your replies to other reviewers to be thorough and convincing as well. Best of luck! --- Reply to Comment 1.1.1: Comment: Again, thank you very much for your valuable feedback and insightful comments. We are largely encouraged by your positive assessment of our paper. After carefully reviewing the paper you mentioned [1], we realize that directly conditioning on $O_{i,1}$ may have been insufficiently cautious. 
The issue with the M-graph that you highlighted is certainly a valid concern. We will discuss this in the paper. Meanwhile, it remains possible to employ a confounder selection algorithm to determine whether to include observations in a data-dependent manner (see, e.g., Example 1 on Page 6 of [2]). However, most confounder selection algorithms only explore the bandit scenario, and further research would be required to extend this to reinforcement learning. We are truly grateful for your insightful feedback once again. [1] Cinelli C, Forney A, Pearl J. A crash course in good and bad controls[J]. Sociological Methods & Research, 2022: 00491241221099552.\ [2] Guo F R, Lundborg A R, Zhao Q. Confounder selection: Objectives and approaches[J]. arXiv preprint arXiv:2208.13871, 2022.
Summary: This paper studies the problem of confounded OPE. The authors explore a new structural assumption of the data-generating process called the two-way confounding assumption. They also propose an algorithm called the two-way deconfounder that can deal with the new setting they consider. The authors perform theoretical and empirical analyses to validate their proposed assumptions and algorithms. Strengths: * The idea of two-way confounding is interesting and well-motivated. This paper is the first to introduce the two-way confounder assumption into the field of causal RL. * The two-way deconfounder algorithm is simple and intuitive. And it can deal with challenging causal RL problems with modern functional approximation techniques. * The authors perform solid theoretical analysis and empirical analysis to evaluate their methodology. The theoretical results include a non-asymptotic error bound of the two-way deconfounder estimator. I checked the proof and found the result is reliable. In the empirical analysis, the authors test their method in both simulated and real-world RL scenarios. They also compare their method with various baselines. Weaknesses: * I suggest the author add more explanations about the problem setting to the intro section, making it more readable for readers outside the field of causal inference. * The algorithm imposes additional modeling restrictions to the MDP by parameterizing $P$ as conditional Gaussians. * (Minor) It seems that some notations may denote either a variable or a set of variables, which is a little bit confusing. Technical Quality: 3 Clarity: 3 Questions for Authors: * In the literature of model-based RL, the transition dynamic can be represented with some kind of generative model (for example, [1]). For the two-way confounder algorithm, is it possible to parameterize $P$ as some generative model and thus eliminate the conditional Gaussian restriction? 
* Given different confounding assumptions (unconstrained/two-way/one-way confounding), how can we determine which setting fits best with the problem if we do not have full prior knowledge of the data-generating process? Do there exist any hypothesis testing procedures or simple heuristic methods? Can the authors give a short discussion? * (Minor) What does the term "relational neural network" mean? [1] Buesing, Lars, et al. "Learning and querying fast generative models for reinforcement learning." arXiv preprint arXiv:1802.03006 (2018). Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have addressed the limitations of their paper in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - __Enhancing clarity of the problem setting in the Introduction.__\ Thank you for your valuable suggestion. We will include additional explanations about the problem setting in the introduction section of the final version. - __Clarifying confusing notations.__\ We apologize for any confusion caused by the misused notations. We will correct these issues in the final version. - __Can the transition dynamics be represented using a generative model instead of a conditional Gaussian distribution?__\ This is a good point. We did assume the transition function $\mathcal{P}$ follows a conditional Gaussian distribution in our implementation. This assumption is commonly employed in RL, as seen in papers such as [1] and [2]. As you mentioned, it is also possible to parameterize the transition function using a generative model, such as mixture density networks, generative adversarial networks or diffusion models. This alternative modeling approach would not affect our theoretical results, as our theory does not require the conditional Gaussianity assumption. We will add these discussions in the final paper. - __How can we determine the best-fitting setting for the problem without prior knowledge of the data-generating process?__\ We greatly appreciate this excellent comment. Our paper covers four confounding assumptions, corresponding to no unmeasured confounding, one-way unmeasured confounding, two-way unmeasured confounding and unconstrained unmeasured confounding. In practice, we may consider the following sequential testing procedure to infer the unknown confounding structure and select the most suitable confounding assumption: * **Step 1: Initial testing for the Markov property**. At the first step, we use state-of-the-art test procedures, such as [3], to determine whether the original data satisfies the Markov property. If the null hypothesis is not rejected, we do not have sufficient evidence to believe there are unmeasured confounders. 
Thus, we stop the procedure and conclude that the data likely does not contain unmeasured confounders. If the null hypothesis is rejected, it suggests the presence of some unmeasured confounders and we proceed to Step 2. * **Step 2: Testing for one-way unmeasured confounding**. We next assume the data contains one-way confounders, estimate them from the data, include the estimators in the observations, and perform the Markov test again using the transformed data. If the null hypothesis is not rejected, we stop the procedure and conclude that the one-way unmeasured confounding assumption is likely to hold. Otherwise, we proceed to Step 3. * **Step 3: Testing for two-way unmeasured confounding**: Finally, we impose the two-way confounding assumption, estimate these confounders from the data and apply the Markov test to the transformed data with estimated two-way confounders incorporated in the observations. If the null hypothesis is not rejected, we stop the procedure and conclude that the two-way unmeasured confounding assumption is likely to hold. Otherwise, we conclude that the unconstrained unmeasured confounding assumption is likely satisfied. We will add the related discussions should our paper be accepted. - __Clarification on relational neural network.__\ The term "relational neural network" has the same meaning as "neural tensor network" [4] mentioned on line 173, known for its ability to capture the intricate interactions between pairs of entity vectors. We apologize for the confusion and will use "neural tensor network" consistently should our paper be accepted. - __References.__\ [1] Yu T, Thomas G, Yu L, et al. Mopo: Model-based offline policy optimization[J]. Advances in Neural Information Processing Systems, 2020, 33: 14129-14142.\ [2] Janner M, Fu J, Zhang M, et al. When to trust your model: Model-based policy optimization[J]. Advances in neural information processing systems, 2019, 32.\ [3] Shi, C., Wan, R., Song, R., Lu, W. and Leng, L., 2020, November. 
Does the Markov decision process fit the data: Testing for the Markov property in sequential decision making. In International Conference on Machine Learning (pp. 8807-8817). PMLR.\ [4] Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. Reasoning with neural tensor networks for knowledge base completion. Advances in neural information processing systems, 26, 2013. --- Rebuttal 2: Comment: I want to thank the authors for their detailed response, which addresses my major concerns. In particular, the authors present a well-rounded discussion on how to determine the best-fitting setting for the problem without knowing the data-generating process. I have also read the authors' responses to other reviews. From my viewpoint, these responses are convincing and provide interesting ideas beyond the paper. In conclusion, I think this paper is a solid work and will keep my positive rating. --- Rebuttal Comment 2.1: Comment: We sincerely appreciate your valuable feedback, which has greatly contributed to our understanding and will guide our future work. Thank you for your time and effort in reviewing our paper!
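The three-step sequential procedure outlined in the rebuttal above can be sketched as follows; `markov_test` (returning a p-value for the Markov property, e.g., via the test of [3]) and `augment_with_factors` (appending estimated one-way or two-way latent factors to the observations) are hypothetical stand-ins for the cited procedures:

```python
def select_confounding_assumption(data, markov_test, augment_with_factors, alpha=0.05):
    """Sequentially infer the confounding structure, as sketched above.

    markov_test: callable, data -> p-value for the Markov property.
    augment_with_factors: callable, (data, mode) -> transformed data with
        estimated 'one-way' or 'two-way' latent factors included in the
        observations. Both callables are illustrative stand-ins.
    """
    # Step 1: test the Markov property on the raw data.
    if markov_test(data) >= alpha:
        return "no unmeasured confounding"
    # Step 2: re-test after adjusting for estimated one-way confounders.
    if markov_test(augment_with_factors(data, "one-way")) >= alpha:
        return "one-way unmeasured confounding"
    # Step 3: re-test after adjusting for estimated two-way confounders.
    if markov_test(augment_with_factors(data, "two-way")) >= alpha:
        return "two-way unmeasured confounding"
    # All tests rejected: fall back to the unconstrained assumption.
    return "unconstrained unmeasured confounding"
```

Each rejection of the Markov property motivates the next, richer confounding assumption, mirroring Steps 1–3 above.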
Summary: The authors study off-policy evaluation for longitudinal data with hidden confounders, which are accounted for by assuming they are either time-invariant or state-invariant but not both. Strengths: * The idea is relatively original, important, and the contribution might be significant. Presentation is somewhat clear. * The section studying real-world MIMIC III data shows that this method might be practically relevant. Weaknesses: * Despite the term "deconfounder" being featured prominently, the authors do not engage with much of the literature on the deconfounder algorithm initially proposed by Wang and Blei (2019). Not much is said beyond the cryptic statement, "the validity of the deconfounder algorithm relies crucially on the consistent estimation of the latent factors." This method is also inspired by two-way fixed effects regression, but since they are drawing connections to the deconfounder algorithm, the authors should discuss its fundamental assumptions in the context of its limitations and criticisms by D'Amour, Ogburn et al., and others. * The assumption of two-way unmeasured confounding is illuminated with concrete examples starting at line 127, but the Propositions 1-3 that follow are difficult to understand without explicit context. Looking at Appendix A, it seems that additional assumptions need to be made for Proposition 2 or Proposition 3 to be true. Minor comments * Typo on line 117: "pluggs" * Figure 5 is too small to read. Technical Quality: 3 Clarity: 2 Questions for Authors: In Figure 4, why are TWD's estimated values so much more negative? And why is the variance higher? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors should discuss limitations in the main text rather than the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - __Engagement with the deconfounder literature, limitations and criticisms of [1]__\ This is an excellent comment. A detailed discussion of the criticisms of the deconfounder algorithm also helps better motivate our algorithm. We plan to include the following discussion after the cryptic statement you mentioned, to more clearly elaborate on the limitations of the deconfounder: To ensure the latent factors can be estimated precisely, [1] imposed the **Consistency of Substitute Confounders** assumption, requiring the unmeasured confounders $Z_i$ to be estimated from the causes $A_i$ with certainty. However, this assumption indeed **invalidates** the derivations of Theorems 6-8 of [1], resulting in the inconsistency of the algorithm. Specifically, as commented by [2], if the event $A=a$ provides a perfect measurement of $Z$, i.e., there is some function $\hat{z}$ such that $\hat{z}(A)=Z$, then the overlap condition fails. As a result, the ATE cannot be consistently identified. [3] expressed similar concerns about the algorithm. Unlike [1], our algorithm does not require such an assumption. Under the RL setting, the proposed two-way unmeasured confounding assumption effectively limits the number of unmeasured confounders to $O(N)+O(T)$, which facilitates their consistent estimation when both the number of trajectories $N$ and the number of decision points per trajectory $T$ grow to infinity, avoiding the need for the unmeasured confounders to be deterministic functions of the actions. - __Clarifications of Propositions 1-3__\ Let us first clarify these propositions: - Proposition 1 aims to prove that when **the true model satisfies the unconstrained unmeasured confounding (UUC) assumption**, we cannot obtain consistent estimation even if **we work with the correct model assumption (i.e., UUC)**. 
- Proposition 2 aims to demonstrate that if **the true model satisfies two-way unmeasured confounding (TWUC)** and **we erroneously assume the model satisfies the one-way unmeasured confounding (OWUC) assumption**, then our estimators would fail to be consistent. It reveals the limitations of OWUC in the presence of time-varying unmeasured confounders, thus highlighting the necessity of the proposed TWUC. - Proposition 3 aims to show that when **the true model satisfies TWUC and we correctly specify the two-way model**, we can obtain consistent estimation. Meanwhile, should our paper be accepted, we will explicitly state the underlying true model assumptions and the imposed model assumptions (highlighted in bold above), to make these propositions clearer. Finally, there are two additional assumptions in the proof of Proposition 2: - The first assumes the oracle $\zeta$ is known; - The second assumes all $W_t$s are i.i.d. Our intention was to make the main text concise and the presentation easy to follow, so we did not introduce them in the statement of Proposition 2. The first condition is not necessary to impose, as an unknown $\zeta$ would enlarge the estimator's MSE under OWUC. Should our paper be accepted, we will remove the first condition and explicitly state the second condition in Proposition 2. We hope the aforementioned changes clarify these propositions. - __Clarification on the experimental results of Figure 4.__ - __The issue of higher variance:__ This is another excellent comment. In response, upon checking the code of a related paper [4], we discovered that our procedure omitted a crucial step: data normalization. Given the large state dimension and significant variance among the values within each dimension in the real data, this omission likely contributed to the higher variance observed in TWD's estimated values. During this rebuttal period, we have incorporated the normalization step and re-analyzed the dataset. 
The updated results, illustrated in Figure 3(a) of the PDF file, demonstrate that the issue with high variance has been addressed effectively, and TWD still performs well. We will modify this figure should our paper be accepted. - __The issue of negative values:__ The negative values arise due to the design of the reward function. In our submitted manuscript, we set the reward to $R_{i,t}=SOFA_{i,t}-SOFA_{i,t+1}$. This choice has been previously used by [5], who also reported similar negative values. During this rebuttal, we have considered two new reward functions $R_{i,t}=-SOFA_{i,t+1}$ and $R_{i,t}=23-SOFA_{i,t+1}$, which were used in [4], and conducted additional experiments under these designs. As shown in Figure 3 of the PDF file, the estimated values of TWD vary from positive to zero to negative, confirming that the appearance of negative values stems from the choice of the reward functions. We will add these discussions should our paper be accepted. - __Discussion of limitations, typos and Figure 5.__\ We will correct the typos and move the limitations of our algorithm to the main text. Figure 5 will be enlarged should our paper be accepted. - __References.__\ [1] Wang Y, Blei D M. The blessings of multiple causes[J]. Journal of the American Statistical Association, 2019, 114(528): 1574-1596.\ [2] D’Amour A. Comment: Reflections on the deconfounder[J]. Journal of the American Statistical Association, 2019, 114(528): 1597-1601.\ [3] Ogburn E L, Shpitser I, Tchetgen E J T. Comment on “blessings of multiple causes”[J]. Journal of the American Statistical Association, 2019, 114(528): 1611-1615.\ [4] Yunzhe Zhou, Zhengling Qi, Chengchun Shi, and Lexin Li. Optimizing pessimism in dynamic treatment regimes: A bayesian learning approach. In International Conference on Artificial Intelligence and Statistics, pages 6704–6721. PMLR, 2023.\ [5] Aniruddh Raghu, Matthieu Komorowski, Imran Ahmed, Leo Celi, Peter Szolovits, and Marzyeh Ghassemi. 
Deep reinforcement learning for sepsis treatment. arXiv preprint arXiv:1711.09602, 2017. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I am glad that you noticed and corrected the underlying issue with Figure 4. I am raising my score accordingly. --- Reply to Comment 1.1.1: Comment: We are sincerely grateful for your thoughtful feedback and for considering our response. We deeply appreciate the time and effort you dedicated to reviewing our work.
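The three SOFA-based reward designs compared in the rebuttal above can be sketched in a few lines. The function name and the sample trajectory below are hypothetical; only the three formulas come from the text.

```python
def rewards(sofa, design="delta"):
    """Map a SOFA trajectory [s_0, ..., s_T] to rewards R_t for t = 0, ..., T-1.

    Designs discussed in the rebuttal:
      "delta"   : R_t = SOFA_t - SOFA_{t+1}   (original manuscript, as in [5])
      "neg"     : R_t = -SOFA_{t+1}           (as in [4])
      "shifted" : R_t = 23 - SOFA_{t+1}       (as in [4])
    """
    if design == "delta":
        return [sofa[t] - sofa[t + 1] for t in range(len(sofa) - 1)]
    if design == "neg":
        return [-s for s in sofa[1:]]
    if design == "shifted":
        return [23 - s for s in sofa[1:]]
    raise ValueError(f"unknown reward design: {design}")
```

For a toy trajectory `[10, 8, 9]`, the "delta" design yields both positive and negative rewards, while the "shifted" design stays positive, mirroring the sign behaviour reported in Figure 3 of the rebuttal PDF.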
Rebuttal 1: Rebuttal: We thank all referees for their valuable and insightful comments. We have addressed all your comments and will incorporate them should our paper be accepted; please refer to our detailed responses to your review. Below, we would like to briefly clarify the notation $\mathbb{E}^{\pi}$ and our data generating process, in response to the comments raised by Reviewers GM6x and P4Ev. The notation $\mathbb{E}^{\pi}[R_{i,t}]$ represents the expectation under the **interventional distribution** of $R_{i,t}$, assuming all actions are generated solely by the target policy $\pi$, irrespective of the unmeasured confounders. In other words, whatever relationship exists between the unmeasured confounders and the actions in the offline data, that relationship is **no longer in effect** when we perform the intervention according to $\pi$. More specifically, the **interventional distribution** of $R_{i,t}$ can be described as follows: - The initial observation is generated according to $\rho_0$; - At each time $t$, the action $A_{i,t}$ is determined by the target policy $\pi(\bullet|O_{i,t})$, independent of the unmeasured confounder $Z_{i,t}$; - The immediate reward $R_{i,t}$ and the next observation $O_{i,t+1}$ are generated according to the transition function $\mathcal{P}(\bullet|A_{i,t},O_{i,t},Z_{i,t})$. In this setup, the unmeasured confounders affect only the reward and next-observation distributions, but not the action. This differs from the offline data generating process; see Figure 2 in the PDF file for an illustration. As an alternative to $\mathbb{E}^{\pi}$, we can adopt Pearl's do-operator to further clarify this expectation, should our paper be accepted. Pdf: /pdf/b3d64cf7149a8f6ba02d085012bed06a24f6bc97.pdf
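The interventional process described above can be illustrated with a toy simulator. Everything concrete here (the dynamics, the reward form, the policy) is a hypothetical stand-in, not the paper's model; the point is only the structure: under $\mathbb{E}^{\pi}$ the action is drawn from $\pi(\bullet|O_t)$ alone, while the unmeasured confounder $Z_t$ still enters the reward and the transition.

```python
import random

def interventional_rollout(pi, T=5, seed=0):
    """Simulate one trajectory under the interventional distribution E^pi.

    The confounder Z_t is drawn each step but is deliberately NOT shown to
    the policy; it only feeds the (toy) reward and transition functions.
    """
    rng = random.Random(seed)
    O = rng.random()                      # initial observation ~ rho_0
    rewards = []
    for _ in range(T):
        Z = rng.random()                  # unmeasured confounder Z_t
        A = pi(O)                         # action from pi(.|O_t) only
        R = A * O + 0.5 * Z               # toy reward depends on (A, O, Z)
        O = 0.5 * O + 0.3 * Z + 0.1 * A   # toy transition depends on (A, O, Z)
        rewards.append(R)
    return rewards

# A hypothetical deterministic target policy that sees only the observation.
pi = lambda O: 1 if O > 0.5 else 0
```

In an offline (behavior) rollout, by contrast, `A` would also depend on `Z`, which is exactly the dependence the intervention removes.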
NeurIPS_2024_submissions_huggingface
2024
Posture-Informed Muscular Force Learning for Robust Hand Pressure Estimation
Accept (poster)
Summary: The authors leverage 3D hand pose and sEMG signals as inputs to facilitate hand pressure estimation. With data gloves, a multimodal hand-object interaction collection system is devised. Empirical experiment results show the efficacy of the proposed method. Furthermore, feasibility and robustness are demonstrated in a single-camera scenario. Strengths: - The collected dataset is superior in scale and provides a holistic collection of various hand-object interactions. - The ablation studies are helpful in providing useful insights. Weaknesses: - A highly related dataset, ''ActionSense: A Multimodal Dataset and Recording Framework for Human Activities Using Wearable Sensors in a Kitchen Environment'' from NeurIPS 2022, should be cited and discussed. - The evaluation is only conducted in a drop-one-session manner. It might be better if a drop-one-subject evaluation could be provided to demonstrate cross-subject ability for a more general application. - The results are impressive in their low relative errors. It would also help to report the absolute errors. - The results are reported with little temporal information. A curve showing the predicted force changing with time would be preferable. Also, quantitative metrics on the temporal consistency would be a bonus if possible. - For glove-less evaluation, only qualitative results are reported. It appears possible to provide a quantitative comparison given the controlled hand postures. For example, attaching sensors to the contact region of the MR-Press makes it possible to provide GT in glove-less scenarios. - Reporting quantitative prediction errors for different postures is a missed chance to provide more insights. Technical Quality: 3 Clarity: 3 Questions for Authors: - Are the readings from 6 sensors belonging to Region 8 simply added up as the pressure of Region 8? - Is it possible to briefly explain how the quality would be influenced by muscle fatigue as mentioned in L158? 
- Will the data glove introduce different interaction patterns from those without data gloves? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please refer to the Weaknesses and Questions part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **(W1) Additional dataset reference** As you suggest, we will cite and discuss it in the related work section as follows: A highly relevant work is the ActionSense dataset, which focuses on capturing multimodal data of human activities in a kitchen environment using wearable sensors. Similar to our approach, ActionSense emphasizes the importance of rich, synchronized, multi-modal data streams for understanding human actions. However, while ActionSense provides a valuable resource for understanding general kitchen activities, our work focuses on incorporating 3D hand posture into muscular force learning for hand pressure estimation. **(W2) Additional cross-user evaluation** Please refer to (Joint Response 2). **(W3) Additional results regarding absolute errors** We have included the Mean Absolute Error (MAE) results in the rebuttal PDF file. Table R6 and Table R7 in the rebuttal PDF file correspond to Table 2 and Table 3 in the main manuscript, respectively, and present the MAE metrics for each comparison. **(W4) Providing temporal information for predicted forces** - **“A curve showing the predicted force changing with time would be preferable.”**: We acknowledge the reviewer's insightful suggestion regarding the need for temporal information in our results. To illustrate the performance of our model over time, we will include figures showing the temporal evolution of both ground truth and predicted pressure values. Figure R1 in the rebuttal PDF file provides an example of such visualization. The figure presents the ground truth pressure and the pressure predicted by our model for all 9 hand regions during TM-Press and Medium Wrap actions. The horizontal axis represents time, each row corresponds to a specific hand region, and the vertical axis in each plot shows the pressure values. In this example, the user performs the TM-Press action for 10 seconds, followed by the Medium Wrap action for another 10 seconds. 
As evident from the figure, the location of pressure exertion and prediction shifts according to the action performed. We believe that these temporal visualizations will enhance the readers' understanding of our framework by clearly demonstrating how data is collected and predictions are made over time. We will include these figures and the corresponding discussion in a new Section D.4 titled "Comparison between Predicted Pressures and Ground Truth over Time" in the revised manuscript. Moreover, we’ll add one of those figures in the main body of the paper at (Line 299) of Section 5.2.1. - **“Quantitative metrics on temporal consistency would be a bonus if possible.”**: We would appreciate it if the reviewer could provide specific examples or references for "quantitative metrics on temporal consistency". This would allow us to better understand the suggestion and provide a relevant analysis. **(W5) Additional experiments and results for glove-less evaluation** Please refer to (Joint Response 1). **(W6) Quantitative prediction errors for different postures** You correctly pointed out that our study does not demonstrate the performance of our framework on unseen gestures during the training phase. We acknowledge this limitation. While constructing our dataset, we strived to collect data encompassing a wide range of hand interactions, including plane interactions, pinch interactions, and grasp interactions, to facilitate comparisons with previous works [R1,R3]. However, we also had to consider the limited time and potential fatigue of our participants. Therefore, we made a selection of representative postures, as shown in Figure 8 and Figure 9. Collecting additional data for novel gestures with new participants is infeasible in this rebuttal phase, preventing us from conducting further experiments on generalization. 
However, as highlighted in Lines 908 and 152, our selection of postures covers a considerable portion of representative movements from the reference grasp taxonomy and those presented in [R3]. Further research with new gestures and objects is certainly needed to solidify these points. Following this point, we will add this text into the limitation section: - While our dataset covers a range of representative hand postures, it is limited in its exploration of novel gestures and object interactions unseen during training. Further research is necessary to investigate the generalizability of our framework to new objects and gestures. Expanding the dataset with a broader range of hand-object interaction scenarios and evaluating the model's performance on these unseen examples would provide valuable insights into its robustness and adaptability. **(Q3)Will the data glove introduce different interaction patterns from those without data gloves?** As outlined in Section 4.4 of the main text, we describe how we canonicalize 3D hand pose data derived from a vision-based hand pose estimator in Section B.4.2, 3D Hand Pose. During inference, we align the 3D hand pose estimated from visual data with the training data representation. This process involves rescaling the hand pose to match the bone lengths used during training, rotating it to align with the kinematic tree of the MANO hand model, and translating it to ensure consistent root joint positioning. This canonicalization minimizes the discrepancy between hand pose data from the glove and the vision-based estimator, improving the generalizability of our model during inference. Therefore, if the inference of hand pose is good enough, there will be no difference between wearing gloves and not. As noted in the Section E. Limitations and Future Works in the appendix, our framework inherently relies on the accuracy of off-the-shelf hand pose detectors during inference (Line 1097). 
This dependency is acknowledged as a limitation of the proposed modality and inference pipeline. Due to limitations on the length of our rebuttal, we would like to address (Q1) and (Q2) in a separate comment. --- Rebuttal 2: Title: Additional comment for Q1 and Q2 Comment: **(Q1) Are the readings from 6 sensors belonging to Region 8 simply added up as the pressure of Region 8?** Instead of summing the values from the 6 sensors within Region 8, we define the maximum value among those sensors as the pressure for that region. Individuals with smaller hands might not activate all 6 pressure sensors in Region 8 during certain grasp postures; therefore, we opted for the maximum value among the 6 sensors rather than the sum. Similarly, we chose the maximum over the average for the same reason. When only a subset of the 6 sensors is activated due to hand size discrepancies, using the average would result in an artificially low pressure reading because of the inactive sensors. **(Q2) Is it possible to briefly explain how the quality would be influenced by muscle fatigue as mentioned in L158?** Muscle fatigue is the decline in a muscle's ability to generate force. It happens when a muscle is repeatedly contracted or held in a sustained contraction for an extended period. Imagine holding a heavy book at arm's length – after a while, your arm will start to shake, and it will become increasingly difficult to keep the book up. This is muscle fatigue. The connection between muscle fatigue and EMG-based control is that muscle fatigue can negatively impact the quality of EMG data used for hand pressure estimation. As muscles tire, their electrical activity changes. This can manifest as increased signal amplitude (RMS) and shifted frequency content [R4]. These changes make it harder for the model to accurately distinguish between different hand postures and pressure levels. 
Moreover, fatigue leads to less consistent muscle activation patterns, making it challenging for the model to learn reliable relationships between EMG signals and exerted pressure [R5]. While muscle fatigue is recognized as an important topic in EMG-based control, it falls outside the scope of this study. To minimize its effects on our results, we ensured sufficient rest periods between data collection trials for both training and testing. To convey this consideration more explicitly, we will include a statement regarding this after Line 930 in Section B.3 Data Collection Protocol. --- Rebuttal Comment 2.1: Comment: Thanks for the thorough responses. Most of my concerns are addressed. --- Reply to Comment 2.1.1: Comment: We again express our sincere gratitude to the reviewer for their insightful comments and constructive feedback. We appreciate the time and effort invested in carefully reviewing our work. The reviewer's suggestions have significantly contributed to enhancing the clarity, depth, and overall quality of our manuscript. We have diligently addressed each concern raised, and we believe the revisions made have strengthened our paper considerably. We are particularly grateful for the valuable guidance on presenting temporal information. We believe that the revised manuscript, incorporating the reviewer's feedback, offers a stronger contribution to the field and we are confident that it will be well-received by the NeurIPS community.
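The max-over-sensors aggregation described in (Q1) above can be sketched in one line. The sensor readings below are hypothetical; the 6-sensor layout for Region 8 comes from the rebuttal.

```python
def region_pressure(sensor_readings):
    """Aggregate the readings of one hand region's sensors into a single value.

    Per the rebuttal, the region pressure is the MAXIMUM over its sensors,
    so partially activated sensors (e.g. on smaller hands) do not drag the
    reading down the way a sum or a mean would.
    """
    return max(sensor_readings)

# Hypothetical example: only 3 of Region 8's 6 sensors are activated.
readings = [0.0, 0.0, 1.8, 2.4, 0.1, 0.0]
```

Here `region_pressure(readings)` gives `2.4`, whereas the mean over all six sensors would be about `0.72`, an artificially low value caused by the inactive sensors.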
Summary: This manuscript introduces a framework for estimating the pressure of 9 points on the hand during 22 distinct hand object interaction types including pinches, grasps and planar interactions. They use an 8 channel sEMG band placed at the forearm as well as hand pose estimated using a glove or monocular video camera. They train models using a dataset from 20 participant that uses pressure sensors in the glove as ground truth. They evaluate models within participants, and compare pressure estimation models that use different combinations of sEMG and hand pose, including different representations of hand pose using joint angles or embedded 3D coordinates. The best performance is achieved by a multimodal network that fuses an encoding of the spectrogram of the sEMG and the 3D hand pose, the later using a 3D convolutional network. They compare performance across fingers and gestures, and give qualitative comparisons to other vision-based approaches. Strengths: - The combination of different modalities integrated in the baseline is I believe new, and the method is evaluated across a sufficient number of participants (20) and gestures. - They compare across relevant single-modality baselines, and break out evaluations broken out per-behavior. - The framework is clearly defined and contextualized. I think the 3D pose embedding and joint angle comparison is nice. Weaknesses: - All evaluation is within-participant. sEMG is known to be quite variable across participants due to variations in anatomy and placement, and the hierarchy of performance across modalities will likely change. - The comparisons with PressureVision++ (Figure 4) and other SOTA approaches are a little underwhelming, as they essentially show the framework only functions when all fingertips are tracked. A more informative set of comparisons would show results, with ground truth, for cases when the fingertips are unoccluded. 
For Figure 16, the quantitative comparison between the two would be informative. - The behavioral generalization to new objects and gestures is unclear. Similarly, the performance with raw vision data is only qualitative. It is unclear how the system handles domain mismatches between the glove and the vision-based hand pose estimation framework. In combination with the lack of cross-participant information, the in-the-wild utility of the approach is unclear. Technical Quality: 3 Clarity: 3 Questions for Authors: ** Questions ** - What is the cross-participant performance? - What is the quantitative performance compared to PressureVision? What is the correlation between the two techniques if an absolute ground truth (e.g. from Sensel) is not possible? - Why does utilizing hand angles not improve the model performance? - How was 3D hand pose acquired in cases without the glove? I don't see a description of this method. - Can you show raw traces of the discretized pressure to give an intuition for what the real pressure data looks like? **Minor** - **Datasets for Hand Pressure Estimation** — do these datasets include pressure? Many of these appear to be object interaction papers. - Fig 4: can you clarify if this participant was in the training dataset? - L302 ‘significant’ - were significance tests run here? - Table 3 - what are the relevant error bars for this table? - nit: L336 - ‘pionering’? - Table 2: unclear what the errors are over - Table 3: unclear what value is being plotted - All exocentric? - nit: Figure 3 is pixelated - The manuscript overstates at multiple points, e.g. L316 ‘exceptional’ Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes, they review this in section F. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **(W1) Additional experiments for cross-user evaluation** Please refer to (Joint Response 2). **(W2-1) Clarification on the comparison with PressureVision++** You are correct that the current comparison might not be entirely fair, and we acknowledge that our framework currently performs best when all fingertips are tracked. However, we'd like to clarify our approach in two ways. First, this limitation is inherent to the PressureVision and PressureVision++ methods themselves. As you pointed out, PressureVision++ relies on visual cues like color and shape changes on the hand caused by pressure. Consequently, it can only estimate hand pressure when all relevant visual features are visible in the camera image. In contrast, our approach doesn't require full hand visibility; as long as sufficient hand posture information can be inferred, pressure estimation is possible. Second, while we aim for comprehensive comparisons, PressureVision and PressureVision++ represent the only viable options available in current research for vision-based hand pressure estimation. While not a perfect comparison, we believe it offers valuable insights given the limited alternatives. **(W2-2) “The quantitative comparison between the two would be informative.”** Please refer to (Joint Response 1). **(W3-1) Limitation on behavioral generalization to new objects and gestures** You correctly pointed out that our study does not demonstrate the performance of our framework on unseen gestures during the training phase. We acknowledge this limitation. While constructing our dataset, we strived to collect data encompassing a wide range of hand interactions, including plane interactions, pinch interactions, and grasp interactions, to facilitate comparisons with previous works [R1,R3]. However, we also had to consider the limited time and potential fatigue of our participants. Therefore, we made a selection of representative postures, as shown in Figure 8 and Figure 9. 
Collecting additional data for novel gestures with new participants is infeasible in this rebuttal phase, preventing us from conducting further experiments on generalization. However, as highlighted in Lines 908 and 152, our selection of postures covers a considerable portion of representative movements from the reference grasp taxonomy and those presented in [R3]. This suggests a degree of generalizability, but further research with new gestures and objects is certainly needed to solidify these points. Following this point, we will add this text to the Limitation section (Line 1107): - While our dataset covers a range of representative hand postures, it is limited in its exploration of novel gestures and object interactions unseen during training. Further research is necessary to investigate the generalizability of our framework to new objects and gestures. Expanding the dataset with a broader range of hand-object interaction scenarios and evaluating the model's performance on these unseen examples would provide valuable insights into its robustness and adaptability. **(W3-2) Clarification on matching the glove and vision-based hand pose estimation framework** You are correct, and we don't claim this to be a perfectly fair comparison. However, we'd like to provide some context on our rationale in two points. First, the authors of PressureVision++ specifically highlight its effectiveness in interacting with diverse surfaces (3rd, 4th, and 6th paragraphs of Introduction [R1]), even presenting training and data collection methodologies designed for such scenarios. This led us to believe that PressureVision++ would be a suitable candidate for comparison with our vision-based method. Second, when considering vision-based methods for comparison, we found PressureVision and PressureVision++ to be the only viable options available in existing research. 
While it may not meet your standards for a completely fair comparison, we believe we made the best possible choice given the available options. **(Q2) Why does utilizing hand angles not improve the model performance?** Based on the current results of our ablation study, we believe that hand angles alone lack sufficient inductive bias to effectively capture the spatial information inherent in hand postures. **(Q3) How was 3D hand pose acquired in cases without the glove?** We specify the utilization of an off-the-shelf hand pose detector in Section 5's introduction and Section 4.4, explicitly stating that we chose to employ ACR (Line 244). Additionally, as mentioned earlier, Section B.4.2, 3D Hand Pose, describes the canonicalization process for transforming the results of vision-based pose estimation into a standardized 3D hand pose representation (Lines 943-975). **(Q4) Can you show raw traces of the discretized pressure to give an intuition for what the real pressure data looks like?** To visually illustrate the performance of our model over time, we will include figures showing the temporal evolution of both ground truth and predicted pressure values. Figure R1 in the rebuttal PDF file provides an example of such visualization. The figure presents the GT pressure and the pressure predicted by our model for all 9 hand regions during TM-Press and Medium Wrap actions. The horizontal axis represents time, each row corresponds to a specific hand region, and the vertical axis in each plot shows the pressure values. In this example, the user performs the TM-Press for 10 seconds, followed by the Medium Wrap for another 10 seconds. We believe that these temporal visualizations will enhance the readers' understanding of our framework by demonstrating how data is collected and predictions are made over time. We will include these figures and the corresponding discussion in a new Section D.4. 
Moreover, we’ll add one of those figures in the main body of the paper at (Line 299) of Section 5.2.1. Due to limitations on the length of our rebuttal, we would like to address the minor points in a separate comment. --- Rebuttal 2: Title: Additional comment for minor points Comment: **(M1) Datasets for Hand Pressure Estimation — do these datasets include pressure, many of these appear to be object interaction papers** You correctly pointed out that the line between object interaction papers and hand contact/pressure papers can be blurry, and Table 1 and Section 3.1, Vision-based Hand Pressure Estimation, reflect this. While some datasets focus solely on hand contact, others include pressure information, and some delve into object interaction to varying degrees. Rather than attempting a strict categorization, which can be challenging due to the inherent interconnectedness of these areas, we chose to provide a more comprehensive overview in this section. This approach allows for a broader discussion of related work and a better understanding of the research landscape. **(M2) Fig 4: can you clarify if this participant was in the training dataset?** Yes. Of course, this testing session on display does not overlap with the training session. **(M3) L302 ‘significant’ - were significance tests run here?** No. Thanks to Reviewer YYDP, we understand that the term ‘significant’ may be understood differently by readers. We’ll remove that expression to avoid misunderstanding. **(M4) Table 3 - what are the relevant error bars for this table?** Table R5 presents the same data as Table 3 in the main manuscript, but includes the standard deviation for each reported value. We will update Table 3 in the revised manuscript to include these standard deviations, providing a more complete picture of the variability in our results. **(M5) Table 2: unclear what errors are over** As guided in the Experiments section (Line 248), we present the evaluation metrics in Appendix D.2. To assess our model's performance, we utilize three metrics: the Coefficient of Determination (R²), which measures the proportion of variance in the actual pressure explained by our model; the Normalized Root Mean Squared Error (NRMSE), which quantifies the precision of our model's pressure predictions; and Classification Accuracy, assessing the model's ability to correctly identify whether pressure is being applied to each region of the hand. We will add this summary to Line 248 of the main text. **(M6) Table 3: unclear what value is being plotted** The caption for Table 3 indicates that the metric is NRMSE, but if there is still any ambiguity, please let us know again. **(M7) all exocentric?** Yes. In this research, we use an exocentric camera for the camera modality. However, we believe it can be extended to the use of egocentric cameras if the hand pose estimation model works well. **(M8) nit: Figure 3 pixelated** It’s a mistake. We’ll fix it. **(M9) Manuscript overstates at multiple points L316 "exceptional"** We’ll remove that expression to avoid overstatement: “Specific actions such as I-Press, M-Press, and R-Press exhibit high accuracy and low NRMSE, showing the model’s superior performance in simpler press interactions.” --- Rebuttal Comment 2.1: Title: Thank you for the detailed response Comment: Thank you for the detailed point-by-point response, which addresses my original comments. I need more time to reconsider my original score but hopefully this will move it in a positive direction. One follow-up on the PressureVision comparisons. These are great. I am wondering why the R2 is so bad. Is it predicting the pressure at the wrong time, is the pressure variable, or is it simply miscalibrated in intensity? --- Reply to Comment 2.1.1: Comment: Thank you for your thorough review and valuable contributions to our work. Your insights are greatly appreciated. Regarding the PressureVision++ quantitative comparisons, we found your comment about the low scores intriguing. 
To prevent miscalibration, we took steps to ensure that the units (psi, which is the unit outputted by the Sensel Morph pressure sensing array) and the range of values predicted by PressureVision++ were converted into Newton units, as is done in our paper, and calibrated for two different force ranges and units, enabling pressure estimation up to 20 N, as described in "B.4.1 sEMG Signal and Pressure Data" (Line 937 of our manuscript). This process helped mitigate potential miscalibration concerns, allowing for a fair comparison between the two systems. Moreover, we performed time synchronization between the camera and the Sensel Morph pressure sensing array. We verified the proper synchronization through data collection and preprocessing, ensuring no issues in this regard. However, it seems that PressureVision++ excels at detecting contact or the presence of force but faces challenges in accurately estimating force magnitude. This is consistent with the trends observed in PressureVision++ [R1]. For clarity, Table 2 (Performance compared to a PressureVision baseline) in [R1] primarily focuses on quantitative performance metrics of PressureVision++. Below is an excerpt from Table 2 in [R1]: | Method | Contact Acc. | Contact IoU | Volumetric IoU | |----------------------|--------------|-------------|----------------| | PressureVision [R2] | 72.7% | 15.2% | 11.3% | | PressureVision++ [R1] | **89.3%** | **41.9%** | **27.5%** | The primary performance metrics defined in [R1] are (1) Contact Accuracy, (2) Contact IoU, and (3) Volumetric IoU. Contact Accuracy and Contact IoU assess the correct identification of contact or the presence of force, corresponding to our "accuracy" metric. Conversely, Volumetric IoU reflects the prediction accuracy of pressure magnitude (section "5. Evaluation" of [R1]), corresponding to our R² or NRMSE, which represent regression task performance. 
While PressureVision++ demonstrates high performance in Contact Accuracy (measuring whether force presence is correctly identified regardless of location) and Contact IoU (IoU between the ground truth and predicted contact image), achieving 89.3% and 41.9%, respectively, Volumetric IoU only reaches 27.5%. This suggests that PressureVision++ excels in classification tasks, accurately identifying the presence or absence of force, but faces challenges in regression tasks, specifically estimating the precise force magnitude. This observation aligns with the trend we observed in our Table R2. In Table R2, PressureVision++ shows comparable accuracy (measuring whether force presence is correctly identified for all five fingers) to the Force-aware Interface (67.90% versus 66.00%). However, PressureVision++ exhibits relatively low values in R-square (40.30%) and NRMSE (32.95%), indicating less accurate performance in estimating force regression. Based on these results, PressureVision++ demonstrates strength in detecting contact or the presence of force but struggles to accurately estimate force magnitude. In comparison, the sEMG only model, while exhibiting similar performance in accuracy to PressureVision++, seems to have a comparative advantage in accurately estimating the precise force magnitude. Furthermore, to ensure transparency and reproducibility of our evaluation of PressureVision++, we will publicly release all data and code used for quantitative assessment and performance calculation of comparative methodologies. This information will be included in a new section, "Cross-User Evaluation," in Section D (Line 1046-1047) of the revised manuscript.
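The regression and classification metrics discussed in this comparison (R², NRMSE, and contact accuracy) can be sketched as follows. The NRMSE normalization by the ground-truth range and the zero contact threshold are assumptions for illustration; the paper's exact conventions may differ.

```python
import math

def r_squared(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def nrmse(y, yhat):
    """RMSE normalized (here) by the ground-truth range; conventions vary."""
    rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))
    return rmse / (max(y) - min(y))

def contact_accuracy(y, yhat, thresh=0.0):
    """Fraction of samples where predicted contact (pressure > thresh)
    matches ground-truth contact, regardless of magnitude."""
    hits = sum((a > thresh) == (b > thresh) for a, b in zip(y, yhat))
    return hits / len(y)
```

These sketches make the rebuttal's point concrete: a method can classify contact well (high `contact_accuracy`) while still regressing magnitude poorly (low `r_squared`, high `nrmse`), since the accuracy metric ignores how far the predicted force is from the ground truth.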
Summary: This paper proposes a hand pressure estimation method based on 3D hand poses and forearm surface electromyography (sEMG) signals. Accordingly, the paper constructs a multimodal dataset containing pressure, 3D hand poses, and sEMG signals. The paper experimentally verifies that combining 3D hand poses and sEMG signals achieves better results compared to using these two modalities independently. Furthermore, compared to vision-based pressure estimation methods, the method achieves better results in complex hand-object interaction scenarios. Strengths: 1) The paper is well written and easy to follow. 2) The idea of integrating 3D hand poses and electromyography signals for hand pressure estimation in this paper is intuitive, interesting, and practical. The experiments in this paper further demonstrate that combining hand poses and electromyography signals is beneficial. 3) The data collection system constructed in this paper is impressive, as collecting high-quality synchronized multimodal hand data is very challenging. This dataset could potentially advance the field of hand pressure perception. Weaknesses: 1) From the perspective of multimodal fusion methods, the framework proposed in this paper lacks novelty. This framework directly combines the global features of the electromyography modality and the global features of hand poses without fully utilizing the fine-grained spatial information and prior structural information of the hand poses. 2) The qualitative comparison with PressureVision++ is unfair. Due to biases in the training dataset and the design preferences of the method, PressureVision++ is more suitable for pressure estimation in planar contact scenarios. However, this paper does not compare with PressureVision++ in such scenarios. 3) The training and testing setups are not sufficiently reasonable. 
During training, hand poses captured by a data glove are used as input, while during testing, the estimated results from a vision-based hand pose estimation model are used as input. There is a significant gap in the distribution of hand poses, which may potentially impair the performance of the hand pose branch. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Would it be beneficial for the overall method's performance to use predicted hand poses as input during training? 2. Have you tried other multimodal information fusion methods? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['Ethics review needed: Research involving human subjects', 'Ethics review needed: Data privacy, copyright, and consent'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **(W1) Clarifications on the novelty of the proposed framework** - We acknowledge that there might be room for improvement in terms of methodological novelty. However, our primary objective in this paper was to pioneer the simultaneous use of electromyography and 3D hand posture data and demonstrate its efficacy. To achieve this, we presented a data collection platform and a pipeline for using our model in inference without a data glove. Subsequently, to validate the approach experimentally, we deliberately opted for the simplest form of machine learning model instead of focusing on introducing a new architecture that incorporates the geometric structures of the model and data. Nevertheless, our work lays a strong foundation for future research in this direction. We will add this point in the Limitations and Future Work section (Line 1107) as follows: - “While our current framework demonstrates the value of combining sEMG and 3D hand pose data, it primarily focuses on leveraging global features from both modalities. Future work could explore incorporating fine-grained spatial information and prior structural knowledge of hand poses into the model architecture to further enhance pressure estimation accuracy and robustness.” **(W2) Clarification on the comparison with PressureVision++** - We agree that this is not a perfectly fair comparison. Regarding this, we would like to clarify the rationale for our comparison and explain the additional quantitative comparison with PressureVision++ we carried out during our rebuttal. Unlike PressureVision, the authors of PressureVision++ specifically highlighted its effectiveness in interacting with diverse surfaces (3rd, 4th, and 6th paragraphs of the Introduction in [R1]), even presenting training and data collection methodologies designed for such scenarios. 
This led us to believe that PressureVision++ would be a suitable candidate for comparison with our vision-based method, which is the primary reason we chose PressureVision++[R1] over PressureVision[R2]. When considering vision-based methods for comparison, we found PressureVision and PressureVision++ to be the only viable options available in existing research. While it may not meet your standards for a completely fair comparison, we believe we made the best possible choice given the available options. We carried out a quantitative comparison with PressureVision++ to support our claim under fairer conditions. **(W3) Clarification on utilizing vision-based hand pose estimation model for testing setup** - As outlined in Section 4.4 (Line 228), we canonicalize 3D hand pose data derived from a vision-based hand pose estimator (more details are shown in Section B.4.2, Line 942). During inference, we align the 3D hand pose estimated from visual data with the training data representation. This process involves rescaling the hand pose to match the bone lengths used during training, rotating it to align with the kinematic tree of the MANO hand model, and translating it to ensure consistent root joint positioning. This canonicalization minimizes the discrepancy between hand pose data from the glove and the vision-based estimator, improving the generalizability of our model during inference. Also, our framework inherently relies on the accuracy of off-the-shelf hand pose detectors during inference (Line 1097). We acknowledge that this dependency is a limitation of the proposed modality and inference pipeline and discuss this limitation in Section E. Limitations and Future Works (Line 1077). **(Q1) Benefits of utilizing predicted hand poses as input for training** - While using predicted hand poses during training might seem beneficial, our training dataset requires participants to wear a glove to collect the ground truth hand pressure data. 
We utilize a pressure-sensing glove (TactileGlove, Pressure Profile Systems) for this purpose. Unfortunately, most current hand pose detectors are not trained to work on hands wearing gloves, or their performance significantly degrades in such situations. Therefore, it is hard to incorporate predicted hand poses during the training phase due to the lack of reliable hand pose estimation for gloved hands. **(Q2) Have you tried other multimodal information fusion methods?** - We haven't explored alternative multimodal information fusion methods in this work. However, we acknowledge that there are numerous potential improvements to be made in both the model design and implementation. Investigating different fusion techniques is a promising direction for future work, and we believe it holds the potential to enhance the accuracy and robustness of our hand pressure estimation framework. We will add this point to the Limitations and Future Works section as follows: “While our current framework demonstrates the value of combining sEMG and 3D hand pose data, it primarily focuses on leveraging global features from both modalities. Future work could explore incorporating fine-grained spatial information and prior structural knowledge of hand poses into the model architecture to further enhance pressure estimation accuracy and robustness.” --- Rebuttal Comment 1.1: Comment: I have carefully read the authors' meticulous rebuttal and the reviews of other reviewers. I thank the authors for providing a detailed rebuttal. First, my concerns about the comparison with PressureVision++ have been addressed. Although the authors state that their main contribution is not the multimodal fusion method, I still think it is important. After all, it is “foreseeable” that fusing more modalities can improve the accuracy of estimation. 
Overall, considering the pioneering nature and adequacy of the experiments, I maintain my initial score, which is borderline accept, but there are still some weaknesses in the depth and innovation of the fusion framework. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate you taking the time to thoroughly review our rebuttal and acknowledge the clarification regarding our comparison with PressureVision++. We understand your perspective on the importance of multimodal fusion methods and agree that there is a clear potential for improving our approach in that aspect. We are grateful for your constructive feedback and insightful suggestions, particularly regarding the exploration of fine-grained spatial information and prior structural knowledge of hand poses. Thank you again for your valuable contribution to our work.
Summary: This paper presents a novel framework for estimating hand pressure during various hand-object interactions using multimodal data. The framework integrates sEMG and 3D hand posture information to enhance the accuracy of pressure estimation. They introduce a dataset to validate their approach. The primary contribution of the study is the exploration of combining vision-driven 3D hand posture data with sEMG to improve the robustness of hand pressure estimation. Strengths: 1. The integration of sEMG signals with 3D hand posture data sounds promising. 2. The dataset is extensive, including 83.2 million frames from various hand-object interactions. 3. The proposed model shows high accuracy in estimating hand pressure. Weaknesses: 1. Although integrating sEMG signals with 3D hand posture data is a promising approach, there is insufficient experimental evidence to support the motivation behind this combination. The paper lacks clear proof of the described benefits beyond a single sentence. 2. The paper does not adequately compare their approach with other vision-based datasets or techniques that also extract 3D hand posture information. 3. While combining sEMG and vision-based methods theoretically addresses individual robustness issues, the claim that this combination improves overall robustness is not convincingly supported by the data. The dataset should demonstrate the robustness of the proposed method, not just improvements in accuracy. Technical Quality: 2 Clarity: 2 Questions for Authors: (none) Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: see weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **(W1) Support for advantages of integrating sEMG signals with 3D Hand Posture data** - We would like to clarify that we performed an ablation study (Tables 2 and 3) and discussed the study results in the corresponding sections. These tables compare the performance of our model using: (1) sEMG data only [R3], (2) sEMG data with hand posture data using angle representation, and (3) sEMG data with 3D hand pose data (Ours). We primarily discussed this analysis in Section 5.2.1, highlighting the benefits of combining sEMG and 3D hand pose information. We acknowledge that our initial explanation of the motivation might not be sufficiently convincing for readers. To provide stronger empirical motivation for our approach, we will add a new figure illustrating cases where similar sEMG patterns are observed despite pressure being applied to different parts of the hand. Figure R4 demonstrates this phenomenon. The figure displays the time-series representation of 8-channel sEMG signals over 5 seconds for two pairs of hand postures: (1) I-Press and M-Press, (2) TI-Pinch and TM-Pinch. Each row represents a different sEMG channel. In both pairs, the pressure is applied differently, targeting either the index fingertip or the middle fingertip. However, we observe very similar sEMG patterns between I-Press and M-Press, and between TI-Pinch and TM-Pinch. This observation demonstrates that while sEMG signals provide valuable information about force, they may not be sufficient for fine-grained pressure localization on their own. This highlights the potential of incorporating hand posture information to guide pressure estimation. We will incorporate this discussion and Figure R4 into the revised manuscript, adding a new second paragraph in Section 4.1 (Lines 172-173) and a new Appendix section titled "Empirical motivation of our framework" to further clarify the rationale behind our approach. 
**(W2) Comparison with vision-based datasets/techniques extracting 3D hand posture info** - We agree that we did not comprehensively cover various vision-based datasets or methodologies for 3D hand estimation as related works or comparative methodologies. However, please take into consideration that the ultimate goal of our research is to estimate hand pressure, not to derive 3D hand posture. For this reason, we selected comparative methodologies and datasets aimed at estimating hand contact or hand pressure rather than 3D hand posture estimation. Since the ultimate goal of our research is hand pressure estimation, we believe that comparisons with 3D hand pose estimation methodologies and datasets may not be contextually consistent. However, we also found that we missed a highly related dataset paper, ActionSense from NeurIPS 2022, which will be added as part of our related works. **(W3) Clarification on the meaning of the robustness in our paper** - In our manuscript, the 'robustness' of the model refers to the model’s capability to ensure high accuracy across various postures over different hand regions. Here, we have detailed the performance across different hand regions (Table 3) for various postures (Figure 3) to show our model’s robustness. Thanks to Reviewer 3PTe, we understand that the term 'robustness' was understood differently by readers; we will revise it to clearly convey our intended meaning. 
Specifically, we will revise the following lines as follows: - (Line 313) “This consistent correlation between predicted values and actual pressure measurements highlights the model’s robustness and adaptability in handling a wide range of hand movements.” → “This consistent correlation between predicted values and actual pressure measurements highlights the model’s ability to maintain high accuracy and reliability across a diverse range of hand parts and postures, thereby demonstrating its robustness.” - (Line 331) “Figure 4b shows demo video footage illustrating our framework’s robustness in estimating hand pressure while continuously changing hand posture, pressure levels, and the objects being grasped.” → “Figure 4b shows demo video footage illustrating our framework’s capability in accurately estimating hand pressure while continuously changing hand posture, pressure levels, and the objects being grasped.” --- Rebuttal Comment 1.1: Comment: We deeply appreciate your meticulous review of our manuscript and the insightful suggestions provided. Your feedback has been instrumental in guiding our revisions, and we have diligently addressed the remaining points raised in your comments. As the discussion phase concludes, we are confident that the detailed clarifications and newly suggested empirical motivation, as presented in our rebuttal, provide a compelling testament to the effectiveness of our proposed method. We sincerely hope that these revisions have adequately resolved your concerns. We remain available to address any further questions you may have and are grateful for your valuable contribution to improving the quality of our work.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for the constructive feedback. We have put our best effort into addressing the weaknesses and questions pointed out by reviewers to strengthen the paper. Our rebuttal PDF includes new experimental results, diagrams, and tables to support our work. In our rebuttal comments, tables and figures in the rebuttal PDF are referenced as Figure Rx & Table Rx, while figures and tables in the main paper are denoted as Figure x and Table x. Below, we have listed the references used in our rebuttal. We summarize the two major changes and updates we have made during the rebuttal period: ### **(Joint Response 1) Quantitative comparison with vision-based method (PressureVision++[R1])** We appreciate the reviewer's suggestion for a quantitative comparison with vision-based methods. While PressureVision methods are inherently limited in their application due to the necessity of a glove for whole-hand pressure measurement, a quantitative comparison would provide valuable insights. To address this, we conducted a quantitative evaluation of PressureVision++ using the same equipment as the original PressureVision++ study: a Logitech Brio 4k webcam and a Sensel Morph pressure sensing array. As PressureVision++ estimates pressure only on fingertips and requires full visibility of the hand within the camera view, we focused our evaluation on plane and pinch interaction sets, specifically: I-Press, M-Press, R-Press, P-Press, IM-Press, MR-Press, TI-Pinch, TM-Pinch, TIM-Pinch, TIMR-Pinch, and TIMRP-Pinch. Following our data collection protocol, each participant was instructed to repeat each action for 30 seconds. To ensure optimal visibility for PressureVision++, we carefully adjusted the camera angle to capture both the fingers and palm. Figure R2 and Figure R3 illustrate the data collection setup for PressureVision++. We collected data from 5 participants using this setup, mirroring the methodology of PressureVision++. 
Please note that the Sensel Morph could not measure thumb force during pinch actions, so this data was excluded from the analysis. Therefore, Table R2 and Table R4 present the performance comparison between PressureVision++ and our model, focusing on the pressure exerted by the tips of the index, middle, ring, and pinky fingers for the specified action set. It is important to note that our quantitative evaluation of PressureVision++ is inherently a cross-user performance report, as the model was trained on a separate dataset. For fairness and consistency, we report the cross-user performance for "sEMG Only[R3]" and "sEMG + 3D Hand Posture(Ours)" in Table R2 and Table R4 as well. Table R2 provides a comparison across all action sets for the three main metrics, while Table R4 offers a detailed breakdown of fingertip performance for plane interactions and pinch actions. As expected, there is a general decrease in performance when moving from within-user to cross-user evaluation. For instance, in our method, we observe a decrease in R² from 88.86% to 66.71%, in NRMSE from 6.65% to 9.27%, and in accuracy from 83.17% to 82.20%. At the same time, as shown in Tables R2 and R4, our method still significantly outperforms both PressureVision++ (66.60% in accuracy) and the sEMG only model (67.90% in accuracy) in the cross-user evaluation. This indicates the superiority of our method in both within-user and cross-user scenarios. ### **(Joint Response 2) Cross-user Evaluation** We acknowledge the reviewer's concern regarding the limited within-participant evaluation and the potential variability of sEMG signals across individuals. To address this, we conducted additional experiments in a cross-user setting to measure the generalizability of our model. Specifically, we used all session data acquired from 16 participants as our training set, while data from the remaining 4 participants was allocated to the test set. 
This ensures that the model is completely blind to the test participants during both training and evaluation. Table R1 and R3 present the results of this cross-user evaluation, comparing our model's performance against the primary baseline method Force-aware interface[R3]. Furthermore, in the rebuttal PDF file, Table R2 and Table R4 provide a breakdown of our performance for index, middle, ring, and pinky finger pressures during plane and pinch interactions, specifically for comparison against PressureVision++. The results from these cross-user evaluations demonstrate that our approach consistently outperforms existing methods[R1,R3] even when encountering unseen participants. For a more detailed analysis of our performance compared to PressureVision++, please refer to our response regarding the quantitative evaluation of PressureVision++ below. We will add new results and discussion as a new subsection titled “Cross-User Evaluation” in Section D (Line 1046-1047) of the revised manuscript. Please refer to reviewer-specific feedback and a one-page PDF with a summary of added experimental results. ### References - [R1] Grady, Patrick, et al. "PressureVision++: Estimating Fingertip Pressure from Diverse RGB Images.", CVPR 2024. - [R2] Grady, Patrick, et al. "PressureVision: estimating hand pressure from a single RGB image." ECCV 2022. - [R3] Zhang, Yunxiang, et al. "Force-aware interface via electromyography for natural VR/AR interaction." ACM Transactions on Graphics (TOG) 2022. - [R4] Lalitharatne, Thilina Dulantha, et al. "A study on effects of muscle fatigue on EMG-based control for human upper-limb power-assist." ICIAfS 2012. - [R5] Eddy, Ethan, Erik J. Scheme, and Scott Bateman. "A framework and call to action for the future development of EMG-based input in HCI." ACM CHI 2023. Pdf: /pdf/84cc70a5b0f7d2867328533710faa21877f9a634.pdf
NeurIPS_2024_submissions_huggingface
2024
Opponent Modeling based on Subgoal Inference
Accept (poster)
Summary: The paper proposes an algorithm for learning policy in a multi-agent environment that predicts subgoal intents of other agents. In this way, the authors address the non-stationarity problem in multi-agent learning. Experiments in three environments show an advantage of the approach over baselines. Strengths: - The proposed method is simple and interesting. - The paper is well-organized and mostly clear. - Pseudocode and data flow charts are helpful. Weaknesses: - Experiments: It's a bit disappointing that in most of the presented experiments, the confidence intervals of OMG and baselines generally overlap (except in the predator-prey environment). I understand that controlling a single agent makes it hard to jump from 10% to 90%. However, I suggest choosing environments that amplify the impact of a single agent. For instance, the 3s_vs_5z environment seems well-chosen (hence the highest advantage). In either case, increasing the number of trials to make confidence intervals tighter would also help. - The authors are sometimes overconfident in their claims. "Obviously, the complexity and uncertainty of predicting the action sequence are much higher than the goal itself" -- this is not always obvious. I agree that in some cases it is true, but there are also many domains where minor action changes lead to completely different states, resulting in much higher uncertainty in predicting subgoals. Instead of suggesting that OMG is always better, I recommend analyzing in which contexts OMG is better. Also, "... the results show they closely correlate with the opponent's trajectory" -- I acknowledge the reported 12-14% hit ratio, particularly given that the intents may change, but I'm not convinced that's enough evidence to claim "closely correlated". - Equations 6-7: It's confusing that g in V(s_t, g) is the opponent's subgoal, while the notation resembles the standard goal-conditioned value function. 
I suggest replacing it with V(s_t, g^o), similar to a^o, or using any other indication that it's the opponent's subgoal. Additionally, to analyze the impact of choosing either (6) or (7), you should include an ablation (even a single experiment with just the results provided) in which you compare to naive choices, such as a random state or a fixed-distance state (e.g., s_t+3). This is especially important given that in Appendix D you report that 80% of subgoals are <=3 steps. Minor comments: - l.139: Technically, by modeling the opponents' subgoals, you don't gain any new information because everything is computed from the observation. - l.148: Why is |G| finite? I think you didn't assume the finiteness of anything, including the action space. - l.154-157: This claim needs stronger support to be that general. The experiment you show here is an illustrative example, too small to frame the takeaway so broadly. - l.160: Why are there fewer subgoal tuples? I understand that both methods create exactly one datapoint for every step. If that's because of duplicates in the buffer, then this example may not be fully relevant (as in other environments, it is nearly impossible to repeat exactly the same state). - l.297: I'd also mention here which value of H you've chosen. Technical Quality: 3 Clarity: 3 Questions for Authors: What is the speed of training and inference of OMG compared to Naive OM and standard RL baselines (wall time)? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: One limitation is mentioned in Section 6. Actually, I don't see why OMG cannot handle open systems. However, I'd add other limitations, such as: modeling subgoals instead of actions is much harder if the state space is much more complex than the action space (e.g., in video games). Additionally, predicting future states may be prohibitively challenging if the policy strongly depends on the opponent's moves (e.g., chess). 
There is no potential negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments. In the following, we address your concerns in detail. > Experiments: It's a bit disappointing that in most of the presented experiments, the confidence intervals of OMG and baselines generally overlap (except in the predator-prey environment). I understand that controlling a single agent makes it hard to jump from 10% to 90%. However, I suggest choosing environments that amplify the impact of a single agent. For instance, the 3s_vs_5z environment seems well-chosen (hence the highest advantage). In either case, increasing the number of trials to make confidence intervals tighter would also help. While OMG does not show a significant advantage in the experiments, it maintains a modest but consistent edge. This is reasonable, considering that opponent-predicting ability is not the sole factor influencing performance. We have increased the number of runs to 10 seeds for Predator-Prey, and the results are presented in the PDF file of the global response. > Equations 6-7: It's confusing that g in V(s_t, g) is the opponent's subgoal, while the notation resembles the standard goal-conditioned value function. I suggest replacing it with V(s_t, g^o), similar to a^o, or using any other indication that it's the opponent's subgoal. Additionally, to analyze the impact of choosing either (6) or (7), you should include an ablation (even a single experiment with just the results provided) in which you compare to naive choices, such as a random state or a fixed-distance state (e.g., s_t+3). This is especially important given that in Appendix D you report that 80% of subgoals are <=3 steps. Following your suggestion, we have added ablation experiments. In these experiments, OMG-random, OMG-1s, and OMG-3s represent subgoals selected from the opponent's future states randomly, at the next step, and at three steps ahead, respectively. The results are presented in the PDF file of the global response. 
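For concreteness, the naive subgoal choices used in this ablation (OMG-random, OMG-1s, OMG-3s) could look as follows; the list-of-states representation and the function name are illustrative assumptions, not the actual implementation.

```python
import random

def select_subgoal(future_states, strategy, horizon=5):
    """Pick a subgoal from the opponent's future states within horizon H.

    future_states: [s_{t+1}, s_{t+2}, ...] (illustrative representation).
    """
    window = future_states[:horizon]
    if strategy == "random":   # OMG-random: any state within the horizon
        return random.choice(window)
    if strategy == "1s":       # OMG-1s: the next state s_{t+1}
        return window[0]
    if strategy == "3s":       # OMG-3s: three steps ahead, s_{t+3}
        return window[min(2, len(window) - 1)]
    raise ValueError(f"unknown strategy: {strategy}")
```

These baselines replace the value-based selection of Eq. 6 / Eq. 7 with fixed or random picks, which is what makes them useful as ablation controls.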
> l.139: Technically, by modeling the opponents' subgoals, you don't gain any new information because everything is computed from the observation. l.148: Why is |G| finite? I think you didn't assume the finiteness of anything, including the action space. Thank you for your correction. Indeed, these statements are imprecise. We are not limited to discrete spaces; in continuous spaces, the state space is infinite. We will correct this. > l.154-157: This claim needs stronger support to be that general. The experiment you show here is an illustrative example, too small to frame the takeaway so broadly. We understand your concern regarding the strength of the statement about the extension of Q-values with subgoals. However, we believe the illustrative example demonstrates that the extension of the Q-table works. Additionally, a similar pattern has been used in the literature [1]. [1] Ashley Edwards, Himanshu Sahni, Rosanne Liu, Jane Hung, Ankit Jain, Rui Wang, Adrien Ecoffet, Thomas Miconi, Charles Isbell, and Jason Yosinski. Estimating q (s, s’) with deep deterministic dynamics gradients. In International Conference on Machine Learning, pages 2825–2835. PMLR, 2020 > l.160: Why are there fewer subgoal tuples? I understand that both methods create exactly one datapoint for every step. If that's because of duplicates in the buffer, then this example may not be fully relevant (as in other environments, it is nearly impossible to repeat exactly the same state). In this experiment, we adopt the Q-learning algorithm. These tuples represent the cells in the Q-table. In the example shown in Figure 3, $\mathcal{A} = \mathcal{A}^o = \{ \text{none}, \uparrow, \downarrow, \leftarrow, \rightarrow \}$ and $\mathcal{G} = \{D_1, D_2\}$. In the Q-table, the number of entries for $(s, g, a)$ is $|S| \cdot |\mathcal{G}| \cdot |\mathcal{A}| = 10 \cdot |S|$, which is fewer than the $|S| \cdot |\mathcal{A}^o| \cdot |\mathcal{A}| = 25 \cdot |S|$ entries for $(s, a^o, a)$. Due to exploration, it is practically impossible to update all cells in the Q-table. 
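The entry counts for the grid example can be sanity-checked with a few lines; this is an illustrative sketch using the example's sizes (|A| = |A^o| = 5, |G| = 2), not code from the paper.

```python
# Per-state Q-table entry counts for the grid example in Figure 3.
num_actions = 5        # A = A^o = {none, up, down, left, right}
num_opp_actions = 5
num_subgoals = 2       # G = {D1, D2}

entries_sga = num_subgoals * num_actions      # (s, g, a):   2 * 5 = 10 per state
entries_saa = num_opp_actions * num_actions   # (s, a^o, a): 5 * 5 = 25 per state

# The subgoal-conditioned table is smaller, so exploration covers it faster.
assert entries_sga < entries_saa
```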
During my experiments, I observed that the number of cells used in the Q-table with $(s, a^o, a)$ was indeed fewer. > l.297: I'd also mention here which value of H you've chosen. The hyperparameters are presented in Appendix B. The subgoal horizon $H$ is 5 for Foraging and Predator-Prey, and 10 for SMAC. > What is the speed of training and inference of OMG compared to Naive OM and standard RL baselines (wall time)? The training and test times are shown in the table below. OMG requires additional training time to compute subgoals from the buffer, as described in Eq. 6 and Eq. 7. | | OMG | Naive OM | D3QN | |:-----:|:-----:|:-----:|:-----:| | training time (minutes) | 848 | 588 | 273 | | test time (minutes) | 38 | 36 | 28 | --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications and additional experiments. > ... the confidence intervals of OMG and baselines generally overlap ... > We have increased the number of runs to 10 seeds for Predator-prey ... Actually, the predator-prey results were fine, I was more concerned about the results in foraging (where the confidence intervals are by an order of magnitude larger than the difference between OMG, naive OM, and LIAM) and partially about 8m. Could you please clarify what are the takeaways from the experiments in Foraging? If the algorithms differ mostly in terms of the number of solution steps, I'm not sure whether this is uniformly reinforced across the methods, or just a chance (given similar scores). Since you provide some experiments in SMAC, I suggest trying more of them instead, particularly: 2m_vs_1z, corridor, 2s_vs_1sc, 2c_vs_64zg. In those tasks the impact of a single agent would be much more visible, and, usually, crucial for the success of the whole team. 
It would be exciting to see that two agents learn a cooperative tactic to win, and then OMG enters and it's able to adapt to that tactic. Especially in corridor, where precise coordination is crucial. Do you think it should be the case? I'm not sure if OMG would be able to follow a subtle tactic off-the-shelf (and hence it may be worse than the original team of agents, which is totally fine), but based on your motivation it should perform better than other variants of opponent modelling, right? --- Reply to Comment 1.1.1: Title: Response to Reviewer FFHk Comment: Thanks for your valuable comments. > Actually, the predator-prey results were fine, I was more concerned about the results in foraging (where the confidence intervals are by an order of magnitude larger than the difference between OMG, naive OM, and LIAM) and partially about 8m. Could you please clarify what are the takeaways from the experiments in Foraging? If the algorithms differ mostly in terms of the number of solution steps, I'm not sure whether this is uniformly reinforced across the methods, or just a chance (given similar scores). The results of the Foraging experiments are shown in the following table: | | score | episode_step | |:-----:|:-----:|:-----:| | Naive OM | 6.95 ± 0.33 | 11.61 ± 0.41 | | LIAM | 6.99 ± 0.27 | 11.34 ± 0.37 | | D3QN | 6.35 ± 0.2 | 12.57 ± 0.19 | | OMG-conserv | 7.15 ± 0.31 | 10.91 ± 0.15 | | OMG-optim | 6.83 ± 0.14 | 11.45 ± 0.26 | In fact, the confidence intervals are not orders of magnitude apart; the chart may make the differences look more exaggerated than they are. The ability to predict the opponent's action/subgoal is not the only factor that determines performance. In Foraging, the suboptimal strategy adjusts its trajectory towards the correct target after 1-2 steps of hysteresis. The baseline algorithms may not realize that the target is invalid until the opponent reaches the target point near it, whereas our method predicts this earlier, allowing for an earlier switch to an alternative target. 
However, due to the random initial positions, not every episode presents a dominant opportunity. A statistical difference of 0.5 in episode steps indicates a substantial advantage in this context. > Since you provide some experiments in SMAC, I suggest trying more of them instead, particularly: 2m_vs_1z, corridor, 2s_vs_1sc, 2c_vs_64zg. In those tasks the impact of a single agent would be much more visible, and, usually, crucial for the success of the whole team. It would be exciting to see that two agents learn a cooperative tactic to win, and then OMG enters and it's able to adapt to that tactic. Especially in corridor, where precise coordination is crucial. Do you think it should be the case? I'm not sure if OMG would be able to follow a subtle tactic off-the-shelf (and hence it may be worse than the original team of agents, which is totally fine), but based on your motivation it should perform better than other variants of opponent modelling, right? Thank you for your suggestion! OMG's advantage over other opponent modeling algorithms that predict actions lies essentially in its ability to predict further ahead. Admittedly, not all tasks require this level of prediction. We'll follow your advice by testing on more maps and will share the results here.
Summary: For cooperative games and general-sum games, this paper proposes opponent modeling by inferring an opponent’s subgoals, rather than inferring actions. They empirically verify that this leads to either similar or better scores over baselines in Foraging (discrete grid game), Predator-Prey (continuous game), and SMAC (high dimensional), as well as a reduction in required timesteps per episode to reach comparable scores to baseline. Strengths: - The paper is well organized, provides ample context and prior research, and solid empirical evidence to support its method. - The idea of predicting subgoals rather than actions is rather intuitive, and the authors are clear in its formalization. - The method is novel. - The paper is fair and thorough in its comparison to baselines. Environments are diverse (discrete, continuous, high dimensional), baseline+OMG share the same neural network architecture, tests are run across 5 seeds and plots provide mean+standard deviation. - A dedicated ablation studies section clarifies the reasoning behind the choice of VAE, horizon, and subgoal selection. - Code is provided in supplementary material and will be open-sourced upon acceptance. Weaknesses: - Improvements over scores in baselines are minor in Foraging. The main difference between the baselines appears to be in the reduction of the episode steps. It would be interesting to see if there is a reduction in the average time steps towards the end of training in Predator-Prey. - Minor typos and clarifications requested in Questions. Technical Quality: 4 Clarity: 3 Questions for Authors: - L152 — missing word? “will reach the same ” → “will reach the same state” - L159-160 — confusing sentence: “...resulting from the tuple (s, a^o, a) is more than numerous than (s,g,a) in the Q-table” - L190 — confusing preposition: “state as…” → “state of”?
- L204 and L206-207 — both sentences here claim seemingly contradictory statements: “...adopting an optimistic strategy akin to the minimax strategy, which applies to cooperative games” seems to contradict the following statement “...leading to a conservative strategy similar to the minimax strategy, which is commonly used for general-sum games”. Are these both similar to minimax? - Section 4.2: Subgoal Selector. This section is a bit hard to follow the reasoning for Eq. 6 and Eq. 7 for OMG-optimistic and OMG-conservative, even after referring to 4.1 Can you clarify in this section why you provide these two distinct manners for subgoal selection? - Section 5, Figure 5 — It is not immediately clear what the X-axis is representing. The numbers in front of “non-homologue” and “homologue” do not have context (they are only clear from reading the Appendix). Can you add a sentence to the description of the Figure? Can you expand on the sentence: “The X-axis represents the opponent’s policies, and “homologue” refers to the policy learned by the same algorithm, while “non-homologue” represents different ones”? - L294 — possible typo: “interrupted” → “interpreted” - Figure 7b — OMG-conserv shows step 1, 5, 6, and 8; OMG-optim shows steps 1, 5, 8, 10. Is there a reason the steps are not matched? (i.e both showing 1,5,6,8 or 1,5,8,10) - L299-300 — the wording of the definition for hit ratio is a bit unclear. Is this essentially the ratio of predicted opponent trajectory to actual opponent trajectory? - Section 5.5: How do you determine the hit ratio if the subgoal is several steps ahead of the current state? For example, if a predicted subgoal is 3 steps north, the opponent could reach the subgoal in 3 steps or more steps. How many steps can an opponent take between the subgoal prediction state in order for it to still count as a hit? 
- L479 — Typo: “ovservation” → “observation” - L504 — Typo: “remained” → “remaining” Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for acknowledging our novel contributions as well as raising valuable questions. > Improvements over scores in baselines are minor in Foraging. The main difference between the baselines appears to be in the reduction of the episode steps. It would be interesting to see if there is a reduction in the average time steps towards the end of training in Predator-Prey. Predator-Prey has a fixed episode length. The score represents the number of times the predators touch the prey. Therefore, this result already reflects a reduction in the average touch time. > L152 — missing word? “will reach the same ” → “will reach the same state” Thank you for your correction. In this paper, goal $g$ represents a feature embedding of a future state. Therefore, it is more accurate to use "will reach the same feature embedding", and we will fix it. > L159-160 — confusing sentence: “...resulting from the tuple (s, a^o, a) is more than numerous than (s,g,a) in the Q-table” In the example shown in Figure 3, $(\mathcal{A} = \mathcal{A}^o = \{ \text{none}, \uparrow, \downarrow, \leftarrow, \rightarrow \})$ and $(\mathcal{G} = \{D_1, D_2\})$. In the Q-table, the number of cells for $(s, g, a)$ is $(10 \cdot |S|)$, which is fewer than the $(25 \cdot |S|)$ cells for $(s, a^o, a)$. > L190 — confusing preposition: “state as…” → “state of”? This sentence implies that the selected state is being treated or used as a subgoal; it emphasizes the role or function of the state in the context of the model. We will modify it in the revision to disambiguate. > L204 and L206-207 — both sentences here claim seemingly contradictory statements: “...adopting an optimistic strategy akin to the minimax strategy, which applies to cooperative games” seems to contradict the following statement “...leading to a conservative strategy similar to the minimax strategy, which is commonly used for general-sum games”. Are these both similar to minimax?
In Line 204, the maximax strategy is used, while Line 206 employs the minimax strategy. These two strategies are different. > Section 4.2: Subgoal Selector. This section is a bit hard to follow the reasoning for Eq. 6 and Eq. 7 for OMG-optimistic and OMG-conservative, even after referring to 4.1 Can you clarify in this section why you provide these two distinct manners for subgoal selection? The maximax strategy aims to maximize the maximum possible gain. It is an optimistic approach that focuses on the best possible outcome. The formula is as follows: $a = \arg\max_{a_i \in \mathcal{A}} \left(\max_{a^o \in \mathcal{A}^o} P(a_i, a^o) \right)$. The minimax strategy focuses on maximizing the minimum gain. The formula is as follows: $a = \arg\max_{a_i \in \mathcal{A}} \left(\min_{a^o \in \mathcal{A}^o} P(a_i, a^o) \right)$. Eq. 6 corresponds to the inner "max" in the maximax strategy, and Eq. 7 corresponds to the inner "min" in the minimax strategy. RL corresponds to the outer "max" of both. > Figure 7b — OMG-conserv shows step 1, 5, 6, and 8; OMG-optim shows steps 1, 5, 8, 10. Is there a reason the steps are not matched? (i.e both showing 1,5,6,8 or 1,5,8,10) The trajectories formed by the two algorithms have different lengths and cannot be matched exactly. We will release the full trace instead of the key steps in the appendix. > L299-300 — the wording of the definition for hit ratio is a bit unclear. Is this essentially the ratio of predicted opponent trajectory to actual opponent trajectory? > Section 5.5: How do you determine the hit ratio if the subgoal is several steps ahead of the current state? For example, if a predicted subgoal is 3 steps north, the opponent could reach the subgoal in 3 steps or more steps. How many steps can an opponent take between the subgoal prediction state in order for it to still count as a hit? For example, the opponent’s trajectory sequence from $t=0$ to $t=4$ is $(s_1, s_2, s_3, s_4, s_5)$.
The agent's prediction from $t=0$ to $t=3$ is $(s_3, s_1, s_5, s_5)$. The hit ratio is calculated as $\frac{|\{s_3, s_5\}|}{|\{s_1, s_2, s_3, s_4, s_5\}|} = 0.4$. The predicted $s_1$ at $t=1$ is not counted because $s_1$ is not present in the trajectory sequence from $t \geq 1$. We'll add a detailed explanation in the appendix. > Section 5, Figure 5 — It is not immediately clear what the X-axis is representing. The numbers in front of “non-homologue” and “homologue” do not have context (they are only clear from reading the Appendix). Can you add a sentence to the description of the Figure? Can you expand on the sentence: “The X-axis represents the opponent’s policies, and “homologue” refers to the policy learned by the same algorithm, while “non-homologue” represents different ones”? > L294 — possible typo: “interrupted” → “interpreted” > L479 — Typo: “ovservation” → “observation” > L504 — Typo: “remained” → “remaining” Thank you for your suggestions and corrections. We will fix these in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for your response to my comments. I have read your rebuttal and will provide further comments soon, as needed.
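The hit-ratio rule described in the rebuttal above (a state predicted at time $t$ counts only if it still appears in the opponent's trajectory from $t$ onward) can be sketched in a few lines of Python; the helper name `hit_ratio` is our own illustration, not from the paper:

```python
def hit_ratio(trajectory, predictions):
    """Fraction of the opponent's visited states that were correctly
    predicted: a prediction made at time t counts only if the predicted
    state appears in the trajectory from time t onward."""
    hits = set()
    for t, pred in enumerate(predictions):
        if pred in trajectory[t:]:
            hits.add(pred)
    return len(hits) / len(set(trajectory))

# Rebuttal example: trajectory (s_1, ..., s_5), predictions (s_3, s_1, s_5, s_5).
# s_1 predicted at t=1 is a miss, since s_1 no longer occurs from t=1 onward.
print(hit_ratio(["s1", "s2", "s3", "s4", "s5"], ["s3", "s1", "s5", "s5"]))  # 0.4
```

This reproduces the $\frac{|\{s_3, s_5\}|}{|\{s_1, s_2, s_3, s_4, s_5\}|} = 0.4$ value from the worked example.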
Summary: The paper is positioned within the ad-hoc teamwork problem in multi-agent systems, where an agent faces partners it has not seen before in training, and must learn to cooperate with them towards performing a task that benefits from cooperation. Authors deal with a specific requirement of ad-hoc teamwork that is opponent modelling: an agent's ability to model other agents in the system. They propose to model other agents at the level of their subgoals, inferred from the observed trajectories of interaction. They propose to learn a Q-function that also takes the inferred opponent sub-goal as input, thus making the policy of the protagonist agent conditioned on the sub-goal of the opponent. The empirical results seem to suggest minor improvements over previous opponent modelling works. Strengths: **Originality** - I do not believe there is much strength in terms of originality. **Quality** - The submission seems technically sound to me _except_ for the definition of a sub-goal. A sub-goal is defined as a feature embedding of a future state. This is not sound to me (see Weaknesses). **Clarity** - The clarity of the submission is slightly above average. **General Comments** As someone from this particular niche, I believe the paper is studying an important problem. Overall, I believe their approach of augmenting state-spaces with inferred information about the opponents is perhaps an under-explored direction. Weaknesses: **Originality** - I do not believe that goal and sub-goal inference for modelling agents is original. The paper is missing an entire line of research in their related works here, namely the inverse planning literature. This community has been studying inferring goals and sub-goals from trajectories through probabilistic modelling of other agents for a long time now (e.g. see [1,2] as a starting point and follow the thread of references and citations). 
In addition, inferring and modelling goals of others is not original also in the ad-hoc teamwork literature (see [3,4]). Others also considered goal recognition between agents actively [5]. These are only some of the papers in this line of work, as there are many more. I am surprised to see none of these works are cited. I would like to see a discussion on what is the novelty/originality left after this literature is accounted for. Conditioning the Q-function on the goal of other is also not an original idea, but this is not the main claim any way. Additionally, the problem of learning to cooperate with previously unseen teammates has been defined and named as "ad-hoc teamwork" in 2010 [6]. Since it is literally the problem setting of the authors, I am surprised there is no mention of this and the original paper is not cited. [1] Zhi-Xuan T, Mann J, Silver T, Tenenbaum J, Mansinghka V. Online bayesian goal inference for boundedly rational planning agents. Advances in neural information processing systems. 2020;33:19238-50 [2] Ying L, Zhi-Xuan T, Mansinghka V, Tenenbaum JB. Inferring the goals of communicating agents from actions and instructions. In Proceedings of the AAAI Symposium Series 2023 (Vol. 2, No. 1, pp. 26-33). [3] Melo FS, Sardinha A. Ad hoc teamwork by learning teammates’ task. Autonomous Agents and Multi-Agent Systems. 2016 Mar;30:175-219. [4] Chen S, Andrejczuk E, Irissappane AA, Zhang J. ATSIS: achieving the ad hoc teamwork by sub-task inference and selection. In Proceedings of the 28th International Joint Conference on Artificial Intelligence 2019 Aug 10 (pp. 172-179). [5] Shvo M, McIlraith SA. Active goal recognition. In Proceedings of the AAAI Conference on Artificial Intelligence 2020 Apr 3 (Vol. 34, No. 06, pp. 9957-9966). [6] Stone P, Kaminka G, Kraus S, Rosenschein J. Ad hoc autonomous agent teams: Collaboration without pre-coordination. In Proceedings of the AAAI Conference on Artificial Intelligence 2010 Jul 5 (Vol. 24, No. 1, pp. 1504-1509).
**Quality** - The experimental section only compares to two relatively simple opponent modelling works LIAM and "Naive OM". However, the authors for instance literally cite the "Machine Theory of Mind" paper and the "Modelling others using oneself in multi-agent reinforcement learning" in related works. Surely these are the closest competitors to their method. Additionally, some of the papers listed above under originality are also close competitors. I believe you need more than two methods in opponent modelling baselines. Especially the two methods compared are by no means considered the state-of-the-art. Technical Quality: 3 Clarity: 3 Questions for Authors: Q1: "_However, focusing on the opponent’s action is shortsighted, which also constrains the adaptability to unknown opponents in complex tasks_" What do you mean by "shortsighted"? How does that _constrain_ the adaptability? These are strong statements but are not concretised at all. Q2: "_...bridge the information gap between agents_" This concept of _information gap_ appeared out of nowhere. It is not explained either. What does this even mean? It needs to be clarified. Q3: "_Autonomous agents, different from those jointly trained, can act autonomously in complex and dynamic environments..._" You seem to be conflating autonomy with decentralized training or being self-interested here. The definition of autonomy does not preclude centralised training. I can have a set of autonomous agents train with privileged information, yet deploy them to act autonomously. Q4: "_Although a lot of the existing methods concentrate on modeling the opponent’s actions, we argue that such an approach is short-sighted, pedantical, and highly complex._" Again, lots of sweeping and strong statements with no backing or clarification. What does short-sighted, pedantical, and highly complex mean? This does not sound professional or scientific. 
If you are going to actually complain about an entire line of literature, you need to make your statement more concrete. Q5: "_Generally, modeling an opponent’s actions just predicts what it will do at the next step. Intuitively, it is more beneficial for the agent to make decisions if it knows the situation of the opponent several steps ahead_" This statement seems to ignore the trajectory prediction literature entirely, and tries to get away with it by saying "generally". Q6: "_Other methods that claim to predict the opponent’s goal [28 , 29], but without explicitly making a connection to the opponent’s goal..._" What does this even mean? Also, see the inverse planning / goal recognition works cited in Weaknesses and their references/citations for works that predict goals explicitly. Q7: "_Unlike these methods, in this paper, we consider the most common..._" I would argue that the setting where other _autonomous_ agents are also learning and adapting to our agent is the more common setting, and the fixed opponent policy is a big simplification of it. This can be justified in certain cases, but it certainly is not the _most common,_ except that it is more common in literature because it is simpler. But if the harder and more realistic problem is already being tackled, I do not see why this point is a plus. Q8: The $\pi^o, a^o$ notation is confusing. I would recommend using the pre-established notational norms in game theory / MARL, $\pi^{-i} , a^{-i}$ for all agents except $i$. Q9: "_Opponent modeling typically predicts the actions of other agents to address the non-stationary problem_." At this point, I am a little confused. You have said "_Unlike these methods, in this paper, we consider the most common setting where opponents have unseen, diverse, but **fixed policies** during test_." So the opponents have fixed policies during test. Then there is absolutely no non-stationarity during test time. 
In fact, since $\pi^o$ is fixed within a test episode, the problem the agent is facing in test time is in fact an MDP. So you are only dealing with the setting where at each episode the agent might be facing a new MDP. How is this different from online single-agent reinforcement learning then? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Part Ⅰ Thank you for your valuable comments. Below is a detailed response addressing each point individually. > I do not believe that goal and sub-goal inference for modelling agents is original. The paper is missing an entire line of research in their related works here, namely the inverse planning literature. This community has been studying inferring goals and sub-goals from trajectories through probabilistic modelling of other agents for a long time now (e.g. see [1,2] as a starting point and follow the thread of references and citations). In addition, inferring and modelling goals of others is not original also in the ad-hoc teamwork literature (see [3,4]). Others also considered goal recognition between agents actively [5]. These are only some of the papers in this line of work, as there are many more. I am surprised to see none of these works are cited. I would like to see a discussion on what is the novelty/originality left after this literature is accounted for. Conditioning the Q-function on the goal of other is also not an original idea, but this is not the main claim any way. Additionally, the problem of learning to cooperate with previously unseen teammates has been defined and named as "ad-hoc teamwork" in 2010 [6]. Since it is literally the problem setting of the authors, I am surprised there is no mention of this and the original paper is not cited. We will add more relevant references to enhance the related works section. Below, we explain the differences between our work and the references you mentioned to highlight our contributions and originality. Our method represents an innovation in the field of opponent modeling. As described, OMG is applicable to both cooperative games and general-sum games, and is not limited to "ad-hoc teamwork" [3,4,6]. OMG emphasizes methodological innovation rather than focusing solely on the ability to solve specific tasks.
Additionally, inferring goals is a straightforward concept and is commonly addressed in numerous papers [5, 7, 8, 9]. However, in the context of opponent modeling for autonomous agents, this method is novel. Unlike the method in [2], which requires agents to communicate with each other, our setting involves autonomous agents that cannot communicate with other agents. Model-based methods [1] require constructing an environmental model and involve extensive computation for planning during execution. This approach differs from the model-free method adopted in this work. > The experimental section only compares to two relatively simple opponent modelling works LIAM and "Naive OM". However, the authors for instance literally cite the "Machine Theory of Mind" paper and the "Modelling others using oneself in multi-agent reinforcement learning" in related works. Surely these are the closest competitors to their method. Additionally, some of the papers listed above under originality are also close competitors. I believe you need more than two methods in opponent modelling baselines. Especially the two methods compared are by no means considered the state-of-the-art. OMG differs from traditional action prediction methods in opponent modeling by introducing a new approach based on subgoal inference. Therefore, we selected related methods as baselines. To the best of our knowledge, we have compared state-of-the-art (SOTA) methods in this domain. Below, we will explain why certain methods were not included as baselines. [1] describes a planning algorithm using "Sequential Inverse Plan Search," which is fundamentally different from the model-free algorithm used in this paper. The only similarity is the concept of "goal inference." In [2], agents are able to communicate with each other, unlike our setting with autonomous agents that cannot communicate. References [3, 4, 6] focus on ad-hoc teamwork. 
The method in [3] does not use deep learning and is difficult to apply to complex environments like SMAC. [4] uses Goal-Conditioned RL (GCRL) and requires goals to be manually defined, whereas OMG does not. Additionally, goal-based reward shaping limits [4] to cooperative tasks. The problem addressed in [5] is active goal recognition, which is quite different from our approach of optimizing policies through opponent modeling. We have conducted a comprehensive survey of related work and compared all relevant baselines under fair conditions. [1] Zhi-Xuan T, Mann J, Silver T, Tenenbaum J, Mansinghka V. Online bayesian goal inference for boundedly rational planning agents. Advances in neural information processing systems. 2020;33:19238-50 [2] Ying L, Zhi-Xuan T, Mansinghka V, Tenenbaum JB. Inferring the goals of communicating agents from actions and instructions. In Proceedings of the AAAI Symposium Series 2023 (Vol. 2, No. 1, pp. 26-33). [3] Melo FS, Sardinha A. Ad hoc teamwork by learning teammates’ task. Autonomous Agents and Multi-Agent Systems. 2016 Mar;30:175-219. [4] Chen S, Andrejczuk E, Irissappane AA, Zhang J. ATSIS: achieving the ad hoc teamwork by sub-task inference and selection. In Proceedings of the 28th International Joint Conference on Artificial Intelligence 2019 Aug 10 (pp. 172-179). [5] Shvo M, McIlraith SA. Active goal recognition. In Proceedings of the AAAI Conference on Artificial Intelligence 2020 Apr 3 (Vol. 34, No. 06, pp. 9957-9966). [6] Stone P, Kaminka G, Kraus S, Rosenschein J. Ad hoc autonomous agent teams: Collaboration without pre-coordination. In Proceedings of the AAAI Conference on Artificial Intelligence 2010 (Vol. 24, No. 1, pp. 1504-1509). [7] Silviu Pitis, Harris Chan, Stephen Zhao, Bradly Stadie, and Jimmy Ba. Maximum entropy gain exploration for long horizon multi-goal reinforcement learning. ICML, 2020. [8] Suraj Nair, Silvio Savarese, and Chelsea Finn. Goal-aware prediction: Learning to model what matters. ICML, 2020.
[9] Menghui Zhu, Minghuan Liu, Jian Shen, Zhicheng Zhang, Sheng Chen, Weinan Zhang, Deheng Ye, Yong Yu, Qiang Fu, and Wei Yang. Mapgo: Modelassisted policy optimization for goal-oriented tasks. IJCAI, 2021. --- Rebuttal 2: Title: Rebuttal (Part Ⅱ) Comment: ## Part Ⅱ > Q1: "However, focusing on the opponent’s action is shortsighted, which also constrains the adaptability to unknown opponents in complex tasks". What do you mean by "shortsighted"? How does that constrain the adaptability? These are strong statements but are not concretised at all. Generally, modeling an opponent’s actions just predicts what it will do at the next step. A lot of opponent modeling papers [1,2,3] predict the next-step action. Intuitively, it is more beneficial for the agent to make decisions if it knows the situation of the opponent several steps ahead. Predicting future states of the opponent has an advantage over predicting future actions. Just like our example in Sec 1 Para 3: *For example, to reach the goal point of $(2, 2)$, an opponent moves from $(0, 0)$ following the action sequence $<\uparrow,\uparrow,\rightarrow,\rightarrow>$ by four steps (Cartesian coordinates). There are also 5 other action sequences, i.e., $<\uparrow,\rightarrow,\uparrow,\rightarrow>, <\uparrow,\rightarrow,\rightarrow,\uparrow>, <\rightarrow,\uparrow,\uparrow,\rightarrow>, <\rightarrow,\uparrow,\rightarrow,\uparrow>, <\rightarrow,\rightarrow,\uparrow,\uparrow>$, that can lead to the same goal. Obviously, the complexity of the action sequence is much higher than the goal itself.* Similar conclusions are also verified by the experiments in Figure 3, and the corresponding analysis is given in Appendix A.1. [1] Georgios Papoudakis, Filippos Christianos, and Stefano Albrecht. Agent modelling under partial observability for deep reinforcement learning. Advances in Neural Information Processing Systems, 34:19210–19222, 2021. [2] Georgios Papoudakis and Stefano V Albrecht.
Variational Autoencoders for Opponent Modeling in Multi-Agent Systems. arXiv preprint arXiv:2001.10829, 2020. [3] Haobo Fu, Ye Tian, Hongxiang Yu, Weiming Liu, Shuang Wu, Jiechao Xiong, Ying Wen, Kai Li, Junliang Xing, Qiang Fu, et al. Greedy when sure and conservative when uncertain about the opponents. In International Conference on Machine Learning, pages 6829–6848. PMLR, 2022. > Q2: "...bridge the information gap between agents". This concept of information gap appeared out of nowhere. It is not explained either. What does this even mean? It needs to be clarified. In Multi-Agent Reinforcement Learning (MARL), "bridging the information gap between agents" refers to enhancing communication and information sharing among agents. This process involves reducing the uncertainty or lack of information each agent has about the environment, as well as the states, actions, or intentions of other agents. By effectively bridging this gap, agents can make more informed decisions, coordinate better, and ultimately achieve more optimal outcomes in their collective tasks. The following literature [1] explores the importance and impact of information sharing among agents. [1] Tan, M. "Multi-Agent Reinforcement Learning: Independent vs. Cooperative Agents." In Proc. of the 10th ICML (1993). > Q3: "Autonomous agents, different from those jointly trained, can act autonomously in complex and dynamic environments...". You seem to be conflating autonomy with decentralized training or being self-interested here. The definition of autonomy does not preclude centralised training. I can have a set of autonomous agents train with privileged information, yet deploy them to act autonomously. We will reorganize the discussion on "Autonomous agents and jointly trained" to eliminate any ambiguity. The aim here is to distinguish our method from those that are "jointly trained," as such methods struggle to generalize against opponents with varying policies, as shown by the results in Appendix C.
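As a quick combinatorial check of the path example from the Q1 response above (Part Ⅱ): reaching $(2, 2)$ from $(0, 0)$ takes some ordering of two ↑ and two → moves, so there are $\binom{4}{2} = 6$ distinct action sequences, matching the six enumerated in the paper. A short sketch (our own illustration, not from the paper):

```python
from itertools import permutations
from math import comb

# Distinct orderings of the multiset {up, up, right, right}.
sequences = set(permutations(["up", "up", "right", "right"]))
print(len(sequences))  # 6 action sequences lead to the goal (2, 2)
print(comb(4, 2))      # 6, the closed-form count 4! / (2! * 2!)
```

This illustrates the rebuttal's point that the action-sequence space grows combinatorially with the horizon, while the goal itself remains a single object.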
> Q4: "Although a lot of the existing methods concentrate on modeling the opponent’s actions, we argue that such an approach is short-sighted, pedantical, and highly complex.". Again, lots of sweeping and strong statements with no backing or clarification. What does short-sighted, pedantical, and highly complex mean? This does not sound professional or scientific. If you are going to actually complain about an entire line of literature, you need to make your statement more concrete. The words "short-sighted, pedantical, and highly complex" describe the weaknesses of "predicting actions" in opponent modeling, which are precisely where our proposed "predicting subgoals" has an advantage. For example, suppose A and B play a prediction game: A can predict only the opponent's next step, while B can predict the next 5 steps. Obviously, A is short-sighted compared with B. We give intuitive examples that illustrate the advantages of predicting subgoals. That example is detailed in Sec 1 Para 3. Similar conclusions are also verified by the experiments in Figure 3, and the corresponding analysis is given in Appendix A.1. --- Rebuttal 3: Title: Rebuttal (Part Ⅲ) Comment: ## Part Ⅲ > Q5: "Generally, modeling an opponent’s actions just predicts what it will do at the next step. Intuitively, it is more beneficial for the agent to make decisions if it knows the situation of the opponent several steps ahead". This statement seems to ignore the trajectory prediction literature entirely, and tries to get away with it by saying "generally". To the best of our knowledge, current literature on plan recognition [1, 2, 3], including the trajectory prediction methods you mentioned, requires substantial prior knowledge, such as hierarchical plan libraries or domain models. These methods, which depend on prior knowledge, are outside the scope of our discussion. [1] Blaylock, N., Allen, J., 2006. Fast hierarchical goal schema recognition.
In: Proceedings of the 21st AAAI National Conference on Artificial Intelligence. pp. 796–801. [2] Sohrabi, S., Riabov, A., Udrea, O., 2016. Plan recognition as planning revisited. In: Proceedings of the 25th International Joint Conference on Artificial Intelligence. pp. 3258–3264. [3] Vered, M., Kaminka, G., 2017. Heuristic online goal recognition in continuous domains. In: Proceedings of the 26th International Joint Conference on Artificial Intelligence. pp. 4447–4454. [4] Tian, X., Zhuo, H., Kambhampati, S., 2016. Discovering underlying plans based on distributed representations of actions. In: Proceedings of the 15th International Conference on Autonomous Agents and Multiagent Systems. pp. 1135–1143. > Q6: "Other methods that claim to predict the opponent’s goal [28 , 29], but without explicitly making a connection to the opponent’s goal...". What does this even mean? Also, see the inverse planning / goal recognition works cited in Weaknesses and their references/citations for works that predict goals explicitly. In the domain of inverse planning, goals are indeed explicitly predicted. We want to emphasize that this paper discusses methods for predicting an opponent's goals within the context of opponent modeling. While both inverse planning and opponent modeling involve understanding other agents' behaviors, inverse planning focuses on inferring goals and intentions, whereas opponent modeling is concerned with predicting opponent behavior and gaining an advantage in the scenario. > Q7: "Unlike these methods, in this paper, we consider the most common...". I would argue that the setting where other autonomous agents are also learning and adapting to our agent is the more common setting, and the fixed opponent policy is a big simplification of it. This can be justified in certain cases, but it certainly is not the most common, except that it is more common in literature because it is simpler. 
But if the harder and more realistic problem is already being tackled, I do not see why this point is a plus. Thank you for your suggestion. We acknowledge that the term "most common" might be ambiguous, and we will revise it to avoid confusion. In reality, due to the high costs of continuous learning, it is rare for deployed agents to continually learn and adapt. Instead, periodic updates are more common. We believe that the setting where opponents have unseen, diverse, but fixed policies during testing is more prevalent at present. > Q8: The $\pi^o, a^o$ notation is confusing. I would recommend using the pre-established notational norms in game theory / MARL, $\pi^{-i}, a^{-i}$ for all agents except $i$. Thank you for your suggestion. We will revise it in the revision. > Q9: "Opponent modeling typically predicts the actions of other agents to address the non-stationary problem." At this point, I am a little confused. You have said "Unlike these methods, in this paper, we consider the most common setting where opponents have unseen, diverse, but fixed policies during test." So the opponents have fixed policies during test. Then there is absolutely no non-stationarity during test time. In fact, since the opponent's policy is fixed within a test episode, the problem the agent is facing in test time is in fact an MDP. So you are only dealing with the setting where at each episode the agent might be facing a new MDP. How is this different from online single-agent reinforcement learning then? Firstly, the sources of non-stationarity differ. In single-agent RL, non-stationarity arises from changes in the environment. In MARL, an additional challenge is the uncertainty from opponents' policies. These different sources lead to varying degrees and forms of non-stationarity, which should not be conflated. Secondly, online reinforcement learning is a broader concept.
In our setting, the agent must quickly adapt to opponents' policy changes, which aligns with the goal of online RL to enable agents to adapt rapidly. --- Rebuttal Comment 3.1: Comment: The response only partially addresses some of my reservations. I choose to increase my score to acknowledge this, however I still do not think the paper is ready for publication and _at least_ must have a strong writing revision. --- Reply to Comment 3.1.1: Title: Response to reviewer fQcX (Part Ⅰ) Comment: Thank you for your acknowledgement of the previous reply. OpenReview currently does not allow us to upload a revised version. We have included the major revisions in this reply. Sec 1 Para 1-2: Autonomous agents are systems capable of making decisions and acting independently in their environment, often operating without direct human intervention [3]. These agents can either cooperate with or compete against each other, depending on the context. In cooperative scenarios, many multi-agent reinforcement learning (MARL) methods [18,36,30,34] aim to bridge the information gap between agents [49] by training agents in a centralized manner, called centralized training with decentralized execution, enabling agents to work together seamlessly to accomplish cooperative tasks. Alternatively, fully decentralized methods [15,35] seek to break free from the constraints of centralized training, allowing agents to achieve collaboration in a simpler, decentralized manner. In competitive scenarios, NFSP [13], PSRO [17], and DeepNash [26] employ self-play to train agents for equilibrium strategies, allowing agents to adapt and improve their policy. By considering how the agent affects the expected learning progress of other agents, LOLA [9] and COLA [44] apply opponent shaping to this setting. Overall, these methods focus on training agents in a way that accounts for their interactions, resulting in a set of policies that enable effective collaboration or competition within a group of agents.
While the above methods emphasize the collective behavior of agents, it is also crucial to consider the role of individual agents, particularly self-interested agents, in these multi-agent environments. A self-interested agent [50,51] operates with the primary goal of maximizing its own benefits, even when interacting with other agents. When the objectives of a self-interested agent align with those of the team, this scenario falls under ad-hoc teamwork [52,53,54]; however, in more general cases, these interactions are framed as noncooperative games [55,56]. A key technique for self-interested agents in such settings is *opponent modeling* [3,57], which enables them to analyze and predict the actions, goals, and beliefs of other agents. Modeling the intentions and policies of other agents can stabilize the agent's training process [24]. Many studies rely on predicting the actions [12,14,11,22,23], goals [29,28], and returns [37] of opponents during training. Then, the autonomous agent can adapt to different or unseen opponents by using the predictions or representations that are produced by the relevant modules. References [1-48] References from the original text. [49] Tan, M. "Multi-Agent Reinforcement Learning: Independent vs. Cooperative Agents." Proc. of 10th ICML (1993). [50] Shoham, Yoav, and Kevin Leyton-Brown. Multiagent systems: Algorithmic, game-theoretic, and logical foundations. Cambridge University Press, 2008. [51] Gintis, Herbert. "Modeling cooperation among self-interested agents: a critique." The journal of socio-economics 33.6 (2004): 695-714. [52] Melo FS, Sardinha A. Ad hoc teamwork by learning teammates’ task. Autonomous Agents and Multi-Agent Systems. 2016 Mar;30:175-219. [53] Chen S, Andrejczuk E, Irissappane AA, Zhang J. ATSIS: achieving the ad hoc teamwork by sub-task inference and selection. In Proceedings of the 28th International Joint Conference on Artificial Intelligence 2019 Aug 10 (pp. 172-179).
[54] Stone P, Kaminka G, Kraus S, Rosenschein J. Ad hoc autonomous agent teams: Collaboration without pre-coordination. In Proceedings of the AAAI Conference on Artificial Intelligence 2010 Jul 5 (Vol. 24, No. 1, pp. 1504-1509). [55] Russell, Stuart J., and Peter Norvig. Artificial intelligence: a modern approach. Pearson, 2016. [56] Nash, John F. "Non-cooperative games." (1950). [57] Nashed, Samer, and Shlomo Zilberstein. "A survey of opponent modeling in adversarial domains." Journal of Artificial Intelligence Research 73 (2022): 277-327. --- Reply to Comment 3.1.2: Title: Response to reviewer fQcX (Part Ⅱ) Comment: Sec 2 Para 6: Plan Recognition [58] involves understanding and predicting hidden aspects of an observed entity's trajectory, such as its goals, plans, and underlying policies. Among its key methods are inverse planning [59,60], which emphasizes deducing the decision-making process, and goal recognition [61], which focuses on predicting the ultimate goal or desired final state. DUP [62] approaches plan recognition by using distributed representations of actions to discover plans not found in existing plan libraries. Some works focus on improving the ability of goal recognition rather than RL policies [63,64]. PRP [65] uses a model-based algorithm to handle unreliable observations and recognize plans. Unlike existing plan recognition methods, our method aims to enhance the agent's policy through opponent modelling, specifically within multi-agent scenarios where no prior knowledge is available. References [58] Carberry, Sandra. "Techniques for plan recognition." User modeling and user-adapted interaction 11 (2001): 31-48. [59] Zhi-Xuan T, Mann J, Silver T, Tenenbaum J, Mansinghka V. Online bayesian goal inference for boundedly rational planning agents. Advances in neural information processing systems. 2020;33:19238-50. [60] Ying L, Zhi-Xuan T, Mansinghka V, Tenenbaum JB. Inferring the goals of communicating agents from actions and instructions.
In Proceedings of the AAAI Symposium Series 2023 (Vol. 2, No. 1, pp. 26-33). [61] Blaylock, N., Allen, J., 2006. Fast hierarchical goal schema recognition. In: Proceedings of the 21st AAAI National Conference on Artificial Intelligence. pp. 796–801. [62] Tian, X., Zhuo, H., Kambhampati, S., 2016. Discovering underlying plans based on distributed representations of actions. In: Proceedings of the 15th International Conference on Autonomous Agents and Multiagent Systems. pp. 1135–1143. [63] Vered, M., Kaminka, G., 2017. Heuristic online goal recognition in continuous domains. In: Proceedings of the 26th International Joint Conference on Artificial Intelligence. pp. 4447–4454. [64] Shvo M, McIlraith SA. Active goal recognition. In Proceedings of the AAAI Conference on Artificial Intelligence 2020 Apr 3 (Vol. 34, No. 06, pp. 9957-9966). [65] Sohrabi, S., Riabov, A., Udrea, O., 2016. Plan recognition as planning revisited. In: Proceedings of the 25th International Joint Conference on Artificial Intelligence. pp. 3258–3264. Other minor modifications are not listed in detail here, such as the change of symbols $\pi^o, a^o \rightarrow \pi^{-i}, a^{-i}$. --- Reply to Comment 3.1.3: Title: Response to reviewer fQcX Comment: Has my latest response addressed your concerns? If you have any further questions, please let me know. Your feedback is very important to us. Thank you again for your review. --- Reply to Comment 3.1.4: Title: Response to reviewer fQcX Comment: We apologize for not receiving your feedback before the discussion ended. We'd like to share our responses to your concerns once more. **Originality:** Our method introduces a new framework in opponent modeling. While modeling an opponent's subgoals may seem intuitive, it differs significantly from previous methods and from works in plan recognition, inverse planning, and goal recognition. **Quality:** We've addressed the potential ambiguities you pointed out and enriched the references to relevant literature.
Details are in our second response. We hope this clarifies our work, and thank you again for your thoughtful review.
Summary: This paper introduces a multi-agent reinforcement learning algorithm focused on opponent modelling through subgoal inference, termed Opponent Modelling based on Subgoal Inference (OMG). Unlike traditional models that predict immediate actions of opponents, OMG leverages historical trajectories to infer an opponent's future subgoals. This method is designed to enhance generalisation across various unseen opponent policies and includes distinct mechanisms for cooperative and general-sum games. The authors report that their approach outperforms traditional action prediction models in several multi-agent benchmarks. Strengths: 1. The approach of predicting subgoals instead of immediate actions may lead to more robust multi-agent systems. Predicting an opponent's short-term actions tends to provide limited information compared to understanding their long-term strategies. 2. The paper is well-structured, with a clear exposition of the problem, detailed methodology, and coherent discussion of results. Diagrams and formulas are appropriately used to aid understanding. 3. By focusing on subgoals, OMG potentially offers a more scalable and generalizable approach to opponent modelling in multi-agent systems, which could be beneficial for complex scenarios. Weaknesses: 1. “The subgoal prior model, denoted as $p_{\psi}$, is a pre-trained variational autoencoder (VAE) using the states previously collected in the environment”: What policy was used to collect these states to pre-train the prior model? This policy will have a huge impact on the coverage of the state space, and hence the subgoal inference model and final policy of OMG. 2. I have concerns about the generalizability and scalability of this approach. The authors test their approach on SMAC as a complex domain with one easy map (8m) and one medium map (3s_vs_5z). A number of SMAC maps have been shown to be solvable by independent learning methods such as IQL [1] and IPPO [2]. 
Can the authors showcase performance on hard SMAC maps such as 6h_vs_8z, corridor, 27m_vs_30m which requires learning more complex strategies and have a higher number of agents? 3. Experimental Concerns: 3a. Why do the authors use different baselines for each environment i.e. D3QN, PPO, and IQL for Foraging, Predator-Prey, and SMAC respectively? Each of these baselines have been used for these environments. [4] 3b. What variant of PPO was used for the Predator-Prey environment? Both IPPO [2] with individual agent critics and MAPPO [3] with centralised critics have shown to exhibit strong performance in a lot of cooperative benchmarks including predator-prey. 3c. It would be helpful to assess the impact of the approach if the authors can compare their approach to state-of-the-art CTDE techniques in MARL i.e. QMIX. [1] QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning [2] Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge? [3] The Surprising Effectiveness of PPO in Cooperative, Multi-Agent Games [4] Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms in Cooperative Tasks Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Could the authors provide more insight into how the subgoal predictor's performance might degrade or improve when scaling to environments with significantly more complex or numerous agents? 2. Are there specific types of multi-agent environments where OMG might not perform as expected? What limitations in the model's assumptions should be considered? 3. How does the model handle highly dynamic environments where the opponent's strategies evolve more rapidly than the model's training updates? 4. Please see weaknesses. 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: While the authors have discussed the technical validation of their model, there could be more emphasis on practical limitations, such as computational overhead and real-time applicability in fast-paced environments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for acknowledging our novel contributions as well as raising valuable questions. > ''The subgoal prior model, denoted as $p_\psi$, is a pre-trained variational autoencoder (VAE) using the states previously collected in the environment'': What policy was used to collect these states to pre-train the prior model? This policy will have a huge impact on the coverage of the state space, and hence the subgoal inference model and final policy of OMG. Training OMG requires a set of policies as the training set. During the preparation of the training set, states are collected to train the prior model. The policies used in the training set are described in Appendix B, lines 494-499. As you noted, $p_\psi$, serving as an encoder, must cover the state space as thoroughly as possible. However, it does not limit the policies used to collect states, allowing for an expanded range of state collection in practice. > I have concerns about the generalizability and scalability of this approach. The authors test their approach on SMAC as a complex domain with one easy map (8m) and one medium map (3s_vs_5z). A number of SMAC maps have been shown to be solvable by independent learning methods such as IQL [1] and IPPO [2]. Can the authors showcase performance on hard SMAC maps such as 6h_vs_8z, corridor, 27m_vs_30m which requires learning more complex strategies and have a higher number of agents? We tested on 6h_vs_8z, and both QMIX and our method (OMG-optim + 5 QMIX agent) had a win rate of 0%. | | QMIX | OMG-optim + 5 homologue | |:-----:|:-----:|:-----:| | win rate | 0.0% | 0.0% | The ability to predict an opponent's actions or subgoals is not the only factor that determines performance, especially in scenarios requiring close cooperation. We are testing on 27m_vs_30m and will provide the results later. This may be challenging due to the persistent scalability issues in opponent modeling. > 3a. 
Why do the authors use different baselines for each environment i.e. D3QN, PPO, and IQL for Foraging, Predator-Prey, and SMAC respectively? Each of these baselines has been used for these environments. [4] The core of OMG is its opponent modeling method, which does not restrict the type of underlying RL method. Therefore, we followed each environment's original setup. > 3b. What variant of PPO was used for the Predator-Prey environment? Both IPPO [2] with individual agent critics and MAPPO [3] with centralised critics have shown to exhibit strong performance in a lot of cooperative benchmarks including predator-prey. We used IPPO. Centralized critics are not applicable to the setup of this study, because the motivation of this paper is to improve a single autonomous agent through opponent modeling, rather than to train a group of agents for a task. > 3c. It would be helpful to assess the impact of the approach if the authors can compare their approach to state-of-the-art CTDE techniques in MARL i.e. QMIX. We have conducted tests in **Appendix C** where QMIX acts as the agent against opponents from the test set. The training paradigm employed by QMIX leads to a lack of generalization for different opponents. Training QMIX with the same methodology as OMG degrades it into IQL, which already serves as one of the existing baselines. The test results indicate that opponents trained using different methods and seeds are not homogeneous, which poses challenges for cooperation. > Could the authors provide more insight into how the subgoal predictor's performance might degrade or improve when scaling to environments with significantly more complex or numerous agents? When modeling opponents, the difficulty of prediction usually escalates in environments with more complex or numerous agents, as the prediction space expands. From a scalability standpoint, OMG offers certain advantages over traditional action prediction methods.
For example, if the state space of the environment excludes agent information, OMG's prediction space does not significantly increase with the number of agents. > Are there specific types of multi-agent environments where OMG might not perform as expected? What limitations in the model's assumptions should be considered? We believe it is essential for opponents in the environment to have clear goals. An opponent with a purposeless, random policy is challenging to model, even though it might not directly impact the agent's rewards. > How does the model handle highly dynamic environments where the opponent's strategies evolve more rapidly than the model's training updates? OMG does not account for scenarios where opponents rapidly evolve during interactions. Addressing such situations requires incorporating planning ideas as discussed in [1, 2], which is left for future work. Currently, OMG mitigates this by ensuring that the training set includes a diverse range of opponent policies, allowing it to generalize well even when opponents rapidly evolve during testing. [1] Zhi-Xuan T, Mann J, Silver T, Tenenbaum J, Mansinghka V. Online bayesian goal inference for boundedly rational planning agents. Advances in neural information processing systems. 2020;33:19238-50. [2] Xiaopeng Yu, Jiechuan Jiang, Wanpeng Zhang, Haobin Jiang, and Zongqing Lu. Model-based opponent modeling. Advances in Neural Information Processing Systems, 35:28208–28221, 2022. --- Rebuttal Comment 1.1: Title: Response to reviewer gEZB Comment: The test performance on 27m_vs_30m is as follows: | | OMG-optim | LIAM | Naive OM | |:-----:|:-----:|:-----:|:-----:| | win rate | 47.4% ± 14.1% | 39.0% ± 9.3% | 38.6% ± 15.0% | The homogeneous agents collaborating with the baselines are trained using the QMIX algorithm, achieving a win rate of 49%.
OMG outperforms these baselines and reaches a win rate close to that of the well-trained QMIX team, demonstrating its effectiveness even in environments with a larger number of opponents. --- Reply to Comment 1.1.1: Title: To reviewer gEZB Comment: Has my response addressed your concerns? If there are any remaining issues, please let me know. If everything is clear, could you consider adjusting the score? Thank you sincerely for your review. --- Rebuttal 2: Title: Thanks for the rebuttal Comment: Thank you for your rebuttal. I appreciate the additional clarifications provided and have decided to update my score, provided the additional experiments are added to the final version of the paper. --- Rebuttal Comment 2.1: Title: Response to reviewer gEZB Comment: Thank you sincerely again for your comments and appreciation.
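For context on the subgoal prior model $p_\psi$ discussed at the start of this thread: the rebuttal describes it as a variational autoencoder pre-trained on states collected in the environment. Below is a minimal, hypothetical numpy sketch of such a state VAE; the dimensions, architecture, and randomly initialized weights are illustrative assumptions only, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper).
STATE_DIM, HIDDEN, LATENT = 8, 16, 2

# Random weights stand in for a trained model.
W_enc = rng.normal(scale=0.1, size=(STATE_DIM, HIDDEN))
W_mu = rng.normal(scale=0.1, size=(HIDDEN, LATENT))
W_logvar = rng.normal(scale=0.1, size=(HIDDEN, LATENT))
W_dec = rng.normal(scale=0.1, size=(LATENT, STATE_DIM))

def encode(s):
    """Map a state to the mean and log-variance of q(z|s)."""
    h = np.tanh(s @ W_enc)
    return h @ W_mu, h @ W_logvar

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the reparameterization trick)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Reconstruct a state from a latent subgoal code."""
    return z @ W_dec

def elbo(s):
    """One-sample evidence lower bound with a Gaussian reconstruction term."""
    mu, logvar = encode(s)
    z = reparameterize(mu, logvar)
    recon = -np.sum((s - decode(z)) ** 2)
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))  # KL(q || N(0, I))
    return recon - kl

states = rng.normal(size=(4, STATE_DIM))  # stand-in for states collected in the environment
print(float(np.mean([elbo(s) for s in states])))
```

In this view, the point raised in the rebuttal is that the encoder only needs broad state coverage in its training data; it does not constrain which policies generated those states.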
Rebuttal 1: Rebuttal: We have added two experiments in the PDF as follows: * We have increased the number of seeds in the Predator-Prey model from 5 to 10 to reduce experimental error and enhance the reliability of our findings. * We have incorporated an ablation study on subgoal selection, where OMG-Random, OMG-1s, and OMG-3s represent subgoals selected from the opponent’s future states randomly, at the next step, and at three steps, respectively. Pdf: /pdf/ae162ecda1bc99307803a04894385a24a4a3e93e.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Spherical Frustum Sparse Convolution Network for LiDAR Point Cloud Semantic Segmentation
Accept (poster)
Summary: This paper proposes a Spherical Frustum Sparse Convolution Network to address the challenge of LiDAR point cloud semantic segmentation. Traditional approaches often project point clouds into 2D images and apply 2D convolutional neural networks (CNNs) or vision transformers, leading to quantization information loss due to the overlap of multiple points projected onto the same 2D location. To overcome this limitation, the paper introduces a novel spherical frustum structure that allows for direct processing of the 3D point cloud data, preserving spatial and geometric information. The proposed Spherical Frustum Sparse Convolution Network achieves more accurate semantic segmentation of LiDAR point clouds, offering a promising approach for robot perception and scene understanding. Strengths: 1. **Direct 3D Processing**: The Spherical Frustum Sparse Convolution Network processes the LiDAR point cloud data directly in 3D space, preserving the spatial and geometric information of the points. This avoids the information loss that occurs in traditional 2D projection-based methods. 2. **Reduced Quantization Loss**: By operating directly on the 3D point cloud, the proposed network significantly reduces quantization loss compared to projection-based approaches. This leads to more accurate semantic segmentation results. 3. **Efficient Sparse Convolution**: The network employs sparse convolution, which is tailored for sparse data such as point clouds. This allows for efficient computation and reduced memory usage, enabling the network to handle large-scale point cloud data. 4. **Spherical Frustum Structure**: The novel spherical frustum structure enables the network to capture contextual information from different orientations and distances, enhancing its ability to identify objects and regions in the point cloud. 5. 
**Improved Performance**: Experimental results show that the proposed Spherical Frustum Sparse Convolution Network achieves superior performance compared to existing methods on various LiDAR point cloud datasets, demonstrating its effectiveness for semantic segmentation tasks. 6. **Flexible and Extensible Framework**: The network framework is flexible and can be easily extended to incorporate additional components or techniques, providing opportunities for further improvements and adaptability to different applications. Weaknesses: 1. **Lack of Conceptual Novelty**: The use of spherical structures, while effective, appears to have been partially explored in SphereFormer. This limits the novelty of the proposed SFCNet in terms of the core concept. 2. **Performance Gap**: Based on the information provided, the performance of SFCNet for semantic segmentation of LiDAR point clouds lags far behind that of SphereFormer, whose radial window transformer structure may achieve higher recognition accuracy, especially for distant objects. 3. **Potential for Further Optimization**: Since SFCNet based on 2D projections lags behind SphereFormer based on 3D voxels, there may be room for optimization in terms of data input or network architecture. This paper does not discuss potential future avenues to address this performance gap or to work on different application scenarios. Technical Quality: 3 Clarity: 3 Questions for Authors: Although the paper is rich in content, its presentation is often uncomfortable to read. The authors are requested to resize Figure 1, remove the captions inside Figures 2 and 3, and optimize their presentation. It is recommended that the internal spacing of Table 1 be increased appropriately. The authors are invited to proofread the paper more carefully.
In addition, similar to the work on semantic segmentation of large scenes, the authors should compare it with state-of-the-art methods such as DRQNet (ACMMM'2023) and JoSS (INFFUS'2024). Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This paper proposes a method SFCNet for point cloud semantic segmentation by directly projecting 3D point clouds into 2D spherical frustums, the main limitation of which lies in the large performance gap. Of course, the proposed method still has a large potential in specific scenarios. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
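The quantized information loss debated in this review thread comes from spherical projection: several 3D points can land on the same 2D pixel, and a conventional range image keeps only one of them. The sketch below shows a RangeNet++-style spherical projection plus a hash-style grouping of points per pixel, which is the general idea behind keeping all points in a frustum; the field-of-view values and image size are illustrative assumptions, and this is not the authors' SFCNet code.

```python
import numpy as np
from collections import defaultdict

def spherical_project(points, H=64, W=2048, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project 3D points onto an H x W spherical grid; returns (row, col) per point."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                        # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / r, -1.0, 1.0))  # elevation
    fu, fd = np.radians(fov_up_deg), np.radians(fov_down_deg)
    row = (1.0 - (pitch - fd) / (fu - fd)) * H
    col = 0.5 * (1.0 - yaw / np.pi) * W
    row = np.clip(np.floor(row), 0, H - 1).astype(int)
    col = np.clip(np.floor(col), 0, W - 1).astype(int)
    return row, col

# Two nearby points collapse onto one pixel: a plain 2D projection would keep
# only one of them, while a per-pixel frustum keeps both.
pts = np.array([[10.0, 0.0, 0.0], [10.0, -0.01, 0.01], [0.0, 10.0, 0.5]])
rows, cols = spherical_project(pts)

frustums = defaultdict(list)  # hash-style map: pixel -> indices of all points in it
for i, key in enumerate(zip(rows.tolist(), cols.tolist())):
    frustums[key].append(i)

print(frustums[(int(rows[0]), int(cols[0]))])  # both colliding points are retained
```

Dropping all but one entry of each `frustums` list reproduces the information loss the paper targets; retaining the lists is the frustum idea in miniature.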
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable suggestions for our paper. We are encouraged by your confirmation of the information preservation, efficiency, contextual information capture ability, performance improvement, network flexibility, and the **"large potential in specific scenarios"** of our SFCNet. The following are our responses to your concerns: --- ### Q1: Conceptual Novelty. The conceptual novelty of our proposed method is solving the quantized information loss during the 2D projection. To address the challenge, we propose the spherical frustum structure to preserve all the points projected onto the same 2D location and the Spherical Frustum sparse Convolution (SFC) and Frustum Farthest Point Sampling (F2PS) to ensure efficiency under the representation of spherical frustum structure. In contrast, the conceptual novelty of SphereFormer [1] is expanding the receptive field of long-range objects by proposing the radius window transformer. Thus, our conceptual novelty is different from SphereFormer. --- ### Q2: Performance Gap with SphereFormer. The performance gap between our SFCNet and SphereFormer mainly results from the different representations of the point cloud. SphereFormer adopts the 3D voxel representation and utilizes the 3D convolutional backbone to obtain effective 3D features of the point cloud. Based on the 3D features, it proposes the radius window transformer to further expand the receptive field and enhance the segmentation performance. In this paper, we mainly focus on solving the quantized information loss of the 2D projection-based methods. Thus, we conduct the experiments based on the 2D convolutional backbone, which results in the performance gap with SphereFormer based on the 3D convolutional backbone. However, we realize a smaller performance gap with the 3D methods compared to the other 2D projection-based methods. ---------- ### Q3: Future Avenues for Addressing Performance Gap. 
In this paper, we address the quantized information loss of the 2D projection-based methods. Thus, our method provides an information-lossless mechanism for future work to apply the latest research results of image feature learning to the field of LiDAR point cloud semantic segmentation. A future avenue to narrow the performance gap with 3D voxel-based methods is to combine the latest image feature learning architectures, such as Vision Mamba [2], with our information-lossless spherical frustum structure. ---------- ### Q4: Addressing the Discomfort in the Presentation. We apologize for the discomfort caused by our presentation. We will polish our presentation according to your valuable suggestions. Specifically, we will resize Figure 1 to a suitable size, remove the captions inside Figures 2 and 3, adjust the contents of Figures 2 and 3, and increase the internal spacing of Table 1 in the final version of our paper. ---------- ### Q5: Comparison with the State-of-the-Art Methods DRQNet and JoSS. We compare our SFCNet with the state-of-the-art methods DRQNet [3] and JoSS [4] as follows. We will add these comparisons and discussions in the final version of our paper. - **DRQNet:** DRQNet proposes the dual representation query strategy and representation selective dependency module for weakly-supervised LiDAR point cloud semantic segmentation. The proposed modules of DRQNet improve the feature aggregation and fusion in the dual feature space of the point cloud and construct a stronger self-supervised signal for weakly-supervised learning. Compared to DRQNet, our method mainly focuses on proposing an efficient information-lossless data representation and a supervised learning method for LiDAR point clouds containing only geometric information. A weakly-supervised learning architecture based on the spherical frustum structure is one valuable direction for our future work.
- **JoSS:** JoSS proposes the cross-modal transformer-based feature fusion method to adopt the cross-attention mechanism for better information fusion. In addition, unimodal data augmentation is proposed in JoSS for point-level contrastive learning. Compared to JoSS, SFCNet does not adopt the RGB image for multi-modal feature fusion and 3D segmentation backbone for point cloud feature extraction. Thus, SFCNet shows a weaker segmentation performance compared to JoSS on the SemanticKITTI test set. However, SFCNet overcomes the quantized information loss of the 2D projection-based method and shows potential to be adopted in applying the future stronger 2D backbone to the field of LiDAR point cloud semantic segmentation. ---------- ### Reference [1] X. Lai _et al._, ‘Spherical Transformer for LiDAR-based 3D Recognition’, CVPR, 2023. [2] L. Zhu _et al._, ‘Vision mamba: Efficient visual representation learning with bidirectional state space model’, ICML, 2024. [3] J. Liu _et al._, ‘Exploring Dual Representations in Large-Scale Point Clouds: A Simple Weakly Supervised Semantic Segmentation Framework’, ACMMM, 2023. [4] Y. Wu _et al._, ‘Joint Semantic Segmentation using representations of LiDAR point clouds and camera images’, Information Fusion, 2024. ---------- We hope our response can address your concerns about the conceptual novelty, performance gap with SphereFormer, and presentation discomfort. If you have further problems, please let us know. --- Rebuttal Comment 1.1: Title: Response from Reviewer Comment: Thanks to the reviewers for their serious response! A lot of my questions and concerns have been addressed. I hope the authors will make the appropriate changes on the revised version of the paper. In return, I will further improve my rating. --- Reply to Comment 1.1.1: Comment: Thank you very much for your time and your recognition of our method and response. 
We promise to make the appropriate changes, including adding the discussions and revising our presentation according to your suggestions, in the revised version of our paper.
Summary: This paper introduces SFCNet, a spherical frustum sparse convolution network designed for semantic segmentation of LiDAR point clouds. Traditional 2D projection methods suffer from quantized information loss when multiple points project onto the same pixel, leading to sub-optimal segmentation. To address this, the authors propose a spherical frustum structure that retains all points within the same pixel by using a hash-based representation. This structure is then utilized in spherical frustum sparse convolution, which considers both neighboring frustums and the nearest point within each frustum for convolution. Additionally, Frustum Farthest Point Sampling (F2PS) is used to sample points stored in spherical frustums. Experimental results on the SemanticKITTI and nuScenes datasets demonstrate that SFCNet outperforms conventional 2D projection-based methods, particularly in the segmentation of small objects, while maintaining complete point cloud information. Strengths: 1. The paper is well-written and easy to follow. The introduction to the Spherical Frustum Sparse Convolution and Frustum Farthest Point Sampling is clear and comprehensible. It provides a clear narrative structure supported by comprehensive figures and tables. The experimental settings are reasonable and well-documented. 2. The motivation to address quantized information loss during projection is well-founded. Weaknesses: 1. The experimental results do not demonstrate a clear superiority of the proposed method. While the SFCNet shows some mIoU improvements over prior projection methods on the SemanticKITTI and nuScenes datasets, these gains are marginal. Furthermore, the performance of SFCNet is still significantly below that of SOTA 3D voxel-based methods. 2. The paper does not provide an analysis of the computational efficiency of the proposed method. 
Given that SFCNet involves specific point sampling for convolution, assessing the computational efficiency is crucial to understand its practical significance and potential trade-offs in real-world applications. Technical Quality: 3 Clarity: 3 Questions for Authors: In Table 1, it is clear that 3D voxel-based methods achieve significantly better results compared to point-based and 2D projection-based methods. Does this suggest that 3D voxel-based methods are inherently superior? Could you provide a comparison of the computational efficiency of SFCNet against other baseline methods? This information is crucial to evaluate the practical applicability and performance trade-offs of SFCNet. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: No limitations are discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable suggestions. We are encouraged by your confirmation of the presentation of our paper and our motivation for overcoming quantized information loss. The following are our responses to your concerns: --- ### Q1: Performance Gain Compared to Prior 2D Projection-based Methods. Though the gains in the overall mIoU metric are marginal, SFCNet achieves significant performance improvements on small objects compared to the previous 2D projection-based methods, as shown in Tables 1 and 2 in the main manuscript, especially on classes such as human and motorcycle. The improvement mainly results from overcoming quantized information loss through the proposed spherical frustum structure. The better performance on small objects shows the significant practical value of SFCNet for safe autonomous driving. --- ### Q2: Performance Gap between SFCNet and 3D Voxel-based Methods. As shown in the common response to you in Q2 of the Author Rebuttal, SFCNet realizes a trade-off between efficiency and performance and thus has practical value in specific scenarios. Beyond this practical contribution, SFCNet also builds a foundation for future work on 2D projection-based methods, since our work provides the spherical frustum structure to overcome the quantized information loss during 2D projection. At the beginning of deep learning-based point cloud semantic segmentation, 2D projection-based methods were the pioneers showing reasonable performance for semantic segmentation of outdoor LiDAR point clouds, since they could transfer the success of image semantic segmentation to point cloud semantic segmentation. Therefore, although 3D voxel-based methods have recently outperformed 2D projection-based methods, this does not mean that 2D projection-based methods are inherently weaker than 3D voxel-based networks.
If a strong image semantic segmentation backbone is proposed, the proposed spherical frustum structure can be adopted to apply that backbone to LiDAR point cloud semantic segmentation without quantized information loss. --- ### Q3: Analysis of Computational Efficiency. As shown in the common response to you in Q1 of the Author Rebuttal, SFCNet shows better computational efficiency than the other baseline methods. Although we use a specific point sampling method, the computational complexity of our point sampling method, Frustum Farthest Point Sampling (F2PS), is linear, as shown at L203 in the main manuscript. Thus, both the theoretical and experimental analyses show that our method is computationally efficient and has significant potential for practical application. --- We hope our response can address your concerns about the performance and computational efficiency of our SFCNet compared to the other baseline methods. If you have further problems, please let us know.
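For context on the linearity claim above: plain farthest point sampling recomputes a nearest-chosen-sample distance for every point after each pick, which costs O(n·k) for k samples. The rebuttal's F2PS avoids this by sampling within fixed-size stride windows of frustums. A minimal sketch of plain FPS for comparison (illustrative only, not the authors' F2PS implementation):

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Plain farthest point sampling: after each pick, update every point's
    distance to the nearest chosen sample, then pick the farthest point.
    Cost is O(n * k) distance computations."""
    n = points.shape[0]
    chosen = [0]                      # arbitrary seed point
    dist = np.full(n, np.inf)         # distance to nearest chosen sample
    for _ in range(k - 1):
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(np.argmax(dist)))
    return np.asarray(chosen)

pts = np.random.default_rng(0).random((1000, 3))
idx = farthest_point_sampling(pts, 16)
```

Running this selection independently inside bounded-size frustum windows, as the rebuttal describes for F2PS, keeps the total cost linear in the number of points for a fixed stride size.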
Summary: The paper introduces a Spherical Frustum Sparse Convolution Network (SFCNet), a novel approach for LiDAR point cloud semantic segmentation that addresses the quantized information loss existing in 2D projection-based methods. By utilizing a spherical frustum structure, the SFCNet preserves all points within a 2D projection area, thus maintaining the complete geometric structure of the data. This model improves the segmentation accuracy, particularly for small objects, through the innovative use of hash-based spherical frustum representation and a custom sampling method called Frustum Farthest Point Sampling (F2PS). Extensive evaluations on the SemanticKITTI and nuScenes datasets demonstrate superior performance over existing methods. Strengths: S1. Originality of Spherical Frustum Structure. The spherical frustum structure is a novel approach that effectively preserves all the points within a given 2D projection area, mitigating the common issue of quantized information loss. S2. Efficiency of Representation. The proposed hash-based spherical frustum representation ensures memory efficiency and computational feasibility. S3. Performance Boost. The enhanced accuracy in segmenting small objects and maintaining geometric integrity significantly contributes to the field, especially for applications in autonomous driving and robotic navigation. Weaknesses: W1. Performance Decrease on Large Classes. While SFCNet introduces significant advancements in semantic segmentation of LiDAR data, the model demonstrates slightly weaker performance on wide-range classes, such as roads and parking areas. This suggests that while the model excels in handling small object segmentation, its techniques might not be as effective for broader landscapes or larger objects. Technical Quality: 3 Clarity: 3 Questions for Authors: Q1a. Performance Change on Large Classes. 
Could the authors provide insights into why SFCNet shows slightly weaker performance on wide-range classes, such as roads and parking areas (see also W1)? Q1b. Maintaining performance on large Classes. Are there any thoughts on how the model could improve the small classes while maintaining performance on larger classes? Q2. Performance Comparison toward tail-end of Distribution. Given a different performance w.r.t. class sizes, additional tests on accuracy w.r.t. data changes might be helpful. How does SFCNet adapt its segmentation accuracy in non-standard environments (like in night settings, heavy rain, etc.)? Insights here could greatly influence its applicability in real-world settings. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Investigating how the system performs in unpredictable, non-standard settings would offer a clearer picture of its real-world readiness. The paper can also benefit from a deeper exploration of SFCNet's slightly weaker performance on wide-range classes such as roads and parking areas. Understanding the factors that contribute to these discrepancies could lead to improvements in the model's overall accuracy and applicability in real-world scenarios. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable suggestions. We are encouraged by your confirmation of our spherical frustum structure, efficiency, and performance improvement on small objects. The following are our responses to your concerns: --- ### Q1: Insights and Improvement Direction on Performance of Wide-Range Classes. - **Discussion on Performance:** To implement the 2D convolution on the spherical frustum, only the nearest points in the neighbor spherical frustums are adopted in the spherical frustum sparse convolution. This design may limit the receptive field of the network and thus result in slightly weaker performance on the wide-range classes. - **Improvement Direction:** The performance improvement on small classes mainly results from the proposed spherical frustum structure, which overcomes quantized information loss and preserves the complete geometry structure. Thus, to maintain performance on both the wide-range classes and the small classes, the direction is to expand the receptive field based on our spherical frustum structure. To realize this, future work could combine vision architectures with larger receptive fields, such as the vision transformer [1] or vision mamba [2], with our spherical frustum structure. --- ### Q2: Additional Tests on Accuracy w.r.t. Data Changes. We adopt two methods to change the data size: increasing instance class sizes by instance augmentation [3] and decreasing labeled class sizes using the ScribbleKITTI [4] dataset. The number of instances before and after increasing instance class sizes is shown in the first table below. In addition, the second table below shows how many million labeled points belong to each class in the original and scribble datasets.
| | car | bicycle | motorcycle | truck | other-vehicle | person | bicyclist | motorcyclist |
| - | - | - | - | - | - | - | - | - |
| Before Increasing | 149531 | 4231 | 2835 | 2543 | 7072 | 7287 | 1390 | 532 |
| After Increasing | 149575 | 5809 | 4387 | 5846 | 8433 | 8208 | 5437 | 4535 |

| | car | bicycle | motorcycle | truck | other-vehicle | person | bicyclist | motorcyclist | road | parking | sidewalk | other-ground | building | fence | vegetation | trunk | terrain | pole | traffic-sign |
| - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| Original | 99.4 | 0.392 | 0.94 | 4.59 | 5.46 | 0.817 | 0.299 | 0.088 | 467.1 | 34.6 | 338.2 | 9.17 | 311.8 | 170.0 | 627.2 | 14.2 | 183.6 | 6.71 | 1.44 |
| Scribble | 6.57 | 0.07 | 0.18 | 0.24 | 0.43 | 0.16 | 0.05 | 0.02 | 16.1 | 1.93 | 24.7 | 0.895 | 25.5 | 38.5 | 60.5 | 2.76 | 9.15 | 1.35 | 0.27 |

The experiments are conducted on the two cases. As shown in the table below, increasing the instance class sizes can improve the segmentation performance, since more instances of the rare classes are added to ease the long-tailed problem. As for decreasing the labeled class sizes, the performance also decreases since the supervision is weaker with fewer labels.

| Method | mIoU (%) $\uparrow$ |
| - | - |
| SFCNet (Ours) | 62.9 |
| SFCNet (Ours) w/ increment of instance class sizes | 63.2 |
| SFCNet (Ours) w/ decrease of whole class sizes | 56.0 |

--- ### Q3: How SFCNet Adapts Accuracy in Non-Standard Environments. Robo3D [5] dataset provides the LiDAR point cloud adding simulated noises in non-standard environments and corresponding semantic labels on the SemanticKITTI validation set. However, the night and heavy rain environments are not provided by Robo3D.
We did not find any other datasets that provide night or heavy-rain data modified from the SemanticKITTI dataset. Thus, to fairly test how our SFCNet and the baseline model adapt their accuracy in non-standard environments with the same point cloud data, the snow and wet-ground environments of the Robo3D dataset are adopted. Actually, snow and wet ground are similar to the falling rain and wet ground of a heavy-rain environment. Meanwhile, the night environment does not cause noise in the LiDAR point cloud, since the LiDAR sensor captures range information by actively sending LiDAR rays. As shown in the table below, when tested in noisy non-standard environments, the performance of our SFCNet decreases. However, it still performs better than the baseline model due to overcoming the quantized information loss. These results indicate the robustness of our method in various real-world settings.

| Method | mIoU (%) $\uparrow$ (Snow) | mIoU (%) $\uparrow$ (Wet Ground) |
| - | - | - |
| Baseline | 44.2 | 53.9 |
| SFCNet (Ours) | 47.5 | 55.8 |

--- ### Reference
[1] Z. Liu _et al._, ‘Swin transformer: Hierarchical vision transformer using shifted windows’, ICCV, 2021.
[2] L. Zhu _et al._, ‘Vision mamba: Efficient visual representation learning with bidirectional state space model’, ICML, 2024.
[3] Z. Zhou _et al._, ‘Panoptic-polarnet: Proposal-free lidar point cloud panoptic segmentation’, CVPR, 2021.
[4] O. Unal _et al._, ‘Scribble-supervised lidar semantic segmentation’, CVPR, 2022.
[5] L. Kong _et al._, ‘Robo3d: Towards robust and reliable 3d perception against corruptions’, ICCV, 2023.
--- We hope our response can address your concerns about the performance in non-standard environments, accuracy testing with data changes, and the reasons and improvement direction for the performance on wide-range classes.
If you have further problems, please let us know. --- Rebuttal Comment 1.1: Comment: Many thanks for the clarification which addressed all my open questions! I would really encourage the authors to include a discussion along Q1 into the paper to illustrate the limitations of the current approach, too. This may also help to emphasize the Conceptual Novelty in a more dialectic way (c.f. reviewer BSNT). --- Reply to Comment 1.1.1: Comment: Thank you very much for your time and your recognition of our method and response. We promise to include the discussion of Q1 in the final version of our paper. Your valuable suggestions really help us improve the illustration of our conceptual novelty in a more dialectical way.
Summary: This paper introduces a novel spherical frustum structure for LiDAR point cloud semantic segmentation, addressing the quantized information loss in 2D projection-based methods. By preserving all points within frustums and employing a memory-efficient hash-based representation, the Spherical Frustum sparse Convolution Network (SFCNet) achieves superior segmentation accuracy, particularly for small objects. The proposed Frustum Farthest Point Sampling (F2PS) method ensures uniform sampling and computational efficiency. Extensive experiments on the SemanticKITTI and nuScenes datasets demonstrate SFCNet's improved performance over existing methods. Strengths: - **Innovation in Data Representation**: The spherical frustum structure preserves all points, avoiding quantized information loss and improving segmentation accuracy, especially for small objects. - **Memory Efficiency**: The hash-based representation efficiently stores spherical frustums, minimizing memory usage compared to dense grid storage. - **Improved Sampling Method**: The Frustum Farthest Point Sampling (F2PS) method ensures uniform point cloud sampling, enhancing the retention of key information while maintaining computational efficiency. Weaknesses: - The proposed methods, including the spherical frustum structure and hash-based representation, add layers of complexity that might pose challenges for practical implementation. It would be interesting to see an analysis of memory usage, parameter size, and inference time. - The paper focuses primarily on comparisons with 2D projection-based methods. While it mentions the performance gap with 3D voxel-based methods, a more detailed analysis and direct comparison with state-of-the-art 3D voxel-based approaches would provide a clearer picture. - The paper lacks a sensitivity analysis of key parameters, such as the stride size in the F2PS, the number of points in the spherical frustums, and the impact of different hash table configurations, etc.
Understanding how sensitive the model performance is to these parameters would be valuable for tuning the model for specific applications. - LiDAR data often contains noise and outliers. The robustness of the proposed method to such imperfections is not thoroughly evaluated. Experiments that introduce varying levels of noise and assess the impact on segmentation performance would be beneficial in demonstrating the method’s robustness. - The qualitative results, although helpful, are somewhat limited in scope. Providing a more diverse set of visualizations, including different types of scenes (e.g., urban, rural, complex intersections), would offer a clearer picture of the method’s practical performance across various real-world scenarios. Technical Quality: 3 Clarity: 3 Questions for Authors: I overall like the idea of this paper. However, my concerns are focused on the experimental results. If the authors can address my concerns regarding the experiments, I am inclined to raise my score. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: no potential negative social impact Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable suggestions for our paper. We are encouraged by your confirmation of our data representation, memory-efficient hash-based representation, and efficient point sampling. The following are our responses to your concerns: --- ### Q1: Analysis of Memory Usage, Parameter Size, and Inference Time. Please refer to the common response to you in Q1 of Author Rebuttal. --- ### Q2: Detailed Analysis with the 3D Voxel-based Methods. Please refer to the common response to you in Q2 of Author Rebuttal. --- ### Q3: Sensitivity Analysis of Key Parameters. We have conducted experiments for different settings of the stride size in the F2PS, the number of points in the spherical frustums, and the configurations of the hash table. - **Stride Size in the F2PS:** Due to computational resource limitations, ablation studies of four different stride-size settings in the first downsampling layer's F2PS are conducted, including $(1,2),(2,1),(2,4),(4,2)$. Since the point cloud to be downsampled in the first downsampling layer is the densest, the impact of different stride sizes of the F2PS is the most significant there. We promise that a thorough analysis will be conducted in the final version of the paper. The results are shown in the table below. The results show that, for the first F2PS, the $(2,2)$ stride setting achieves better segmentation performance than the other settings. The $(2,2)$ stride sizes suitably downsample the point cloud in the vertical and horizontal dimensions; higher or lower downsampling rates result in oversampling or undersampling of the point cloud, respectively.
| Stride Sizes $(S_h,S_w)$ | mIoU (%) $\uparrow$ |
| - | - |
| (2,1) | 60.7 |
| (1,2) | 62.4 |
| (2,4) | 62.3 |
| (4,2) | 60.5 |
| (2,2) (Ours SFCNet) | 62.9 |

- **Number of Points in the Spherical Frustums:** In the spherical frustum structure, the number of points in the frustum is unlimited and only depends on how many points are projected onto the corresponding 2D location. To analyze the effect of the number of points in the frustum, we set the maximal number of points in each spherical frustum and the points exceeding the maximal point number are dropped. As shown in the table below, preserving more points in the spherical frustum results in better segmentation performance, since more complete geometry information is preserved. These results further indicate the significance of overcoming quantized information loss in the field of LiDAR point cloud semantic segmentation.

| Maximal Number of Points in Spherical Frustum | mIoU (%) $\uparrow$ |
| - | - |
| 2 | 61.0 |
| 4 | 61.9 |
| Unlimited (Ours SFCNet) | 62.9 |

- **Configuration of the Hash Table:** The number of hash functions is the main parameter of the hash table, which means the number of functions used for the hash table retrieval. In the implementation, if the first hash function can successfully retrieve the location of the target point, the other functions will not be used. We change the number of hash functions to show the model sensitivity of hash table configurations. As shown in the table below, the performance and inference time of SFCNet have little difference under different numbers of hash functions. The results show that in most cases, the first function can successfully retrieve the location, and thus the inference times change slightly in different function numbers. These results indicate that SFCNet is robust to different hash table configurations.
| Number of Hash Functions | Inference Time (ms) $\downarrow$ | mIoU (%) $\uparrow$ |
| - | - | - |
| 2 | 59.5 | 62.9 |
| 3 | 60.1 | 62.9 |
| 5 | 59.5 | 62.9 |
| 4 (Ours SFCNet) | 59.7 | 62.9 |

--- ### Q4: Performance With Different Levels of Noise in LiDAR Data. Gaussian noise with zero mean and different standard deviations (0.25 m and 0.5 m) is added to the original LiDAR point cloud to simulate different levels of noise. As shown in the table below, though the segmentation performance decreases as the level of noise increases, our SFCNet performs better than the baseline model in all cases. Thus, the proposed SFCNet is more robust to point cloud noise than the baseline model, which suffers from quantized information loss.

| Method | mIoU (%) $\uparrow$ |
| - | - |
| Baseline w/o noise | 59.7 |
| SFCNet (Ours) w/o noise | 62.9 |
| Baseline w/ 0.25m noise | 43.8 |
| SFCNet (Ours) w/ 0.25m noise | 46.6 |
| Baseline w/ 0.5m noise | 30.7 |
| SFCNet (Ours) w/ 0.5m noise | 34.8 |

--- ### Q5: Qualitative Results in More Diverse Types of Scenes. We show the qualitative results in more diverse types of scenes on the SemanticKITTI test set in Fig. 1 of the attached PDF file, including urban, rural, and complex intersection scenes. As shown in Fig. 1, compared to CENet [1], SFCNet can better segment the distant persons in the complex intersection scene of Fig. 1(c-1) and the urban scene of Fig. 1(a-2), the poles in the urban scene of Fig. 1(a-1) and the complex intersection scene of Fig. 1(c-2), and the trunks in the rural scenes of Fig. 1(b-1) and (b-2). These results validate the consistently better segmentation performance of SFCNet in various real-world scenarios, especially for 3D small objects. --- ### Reference
[1] H.-X. Cheng _et al._, ‘Cenet: Toward Concise and Efficient Lidar Semantic Segmentation for Autonomous Driving’, ICME, 2022.
--- We hope our response can address your concerns about the experimental results. If you have further problems, please let us know.
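The Q4 noise protocol above (zero-mean Gaussian noise with 0.25 m or 0.5 m standard deviation added to the point coordinates) can be sketched as follows; the function name and fixed seed are illustrative assumptions, not the authors' code:

```python
import numpy as np

def perturb_point_cloud(xyz, sigma, seed=0):
    """Add zero-mean Gaussian noise with standard deviation `sigma` (metres)
    to every coordinate, simulating sensor noise as in the Q4 robustness test."""
    rng = np.random.default_rng(seed)
    return xyz + rng.normal(0.0, sigma, size=xyz.shape)

cloud = np.zeros((100_000, 3))        # stand-in for a ~100K-point LiDAR scan
noisy = perturb_point_cloud(cloud, sigma=0.25)
```

With enough points, the empirical per-coordinate standard deviation of the perturbation closely matches the requested sigma, so the two noise levels in the table are directly comparable.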
--- Rebuttal Comment 1.1: Title: Raised my rating to ACCEPT Comment: I appreciate the authors' rebuttal and additional experiments. I have read the comments from other reviewers and the related rebuttal. The rebuttal has effectively addressed my main concerns. Additionally, I highly appreciate the responses to Q3 and Q5, and I suggest incorporating them into the supplementary material. Overall, I believe this is a technically solid paper, and I am inclined to accept it - I have raised my rating from borderline accept to **ACCEPT**. --- Reply to Comment 1.1.1: Comment: Thank you very much for your time and your recognition of our method and response. We promise to incorporate our responses to your Q3 and Q5 into the supplementary material in the final version of our paper.
Rebuttal 1: Rebuttal: We appreciate all reviewers' efforts and valuable suggestions. We are grateful to receive the positive comments from the reviewers, including memory efficiency (Reviewer AnTM, bPaa, BSNT), innovation in spherical frustum structure (Reviewer AnTM, bPaa), and performance "boost" (Reviewer bPaa, BSNT). We have responded to the concerns of each reviewer individually. In addition, we also notice the common concerns, including memory and computational efficiency (Reviewer AnTM, MsCy), and performance gap with 3D voxel-based methods (Reviewer AnTM, MsCy), raised by the reviewers. Thus, we provide the common response to these concerns as follows: --- ### Q1: Analysis of Memory Usage, Parameter Size, and Inference Time (Reviewer AnTM, MsCy). As shown in the table below, the analysis of memory usage, parameter size, and inference time is conducted through a comparison between our SFCNet and the other baselines. The normalized memory usage and inference time, computed by dividing the total memory usage and inference time by the number of points in thousands, are also shown. The results show: - **Memory Usage:** From the metric of normalized memory usage, the memory efficiency of our SFCNet is better than the point-based methods [1-2] and 3D voxel-based methods [3-4]. As for the 2D projection-based methods, SFCNet has better memory efficiency than the baseline model and comparable memory efficiency to RangeViT [5]. The results show that based on the memory-efficient hash-based representation, the spherical frustum structure introduces little extra memory consumption and is suitable for practical application. - **Parameter Size:** Since SFCNet has pure 2D convolutional layers, SFCNet has a much smaller parameter size than the transformer-based methods [4-5] and 3D convolution-based methods [3]. In addition, the comparison between SFCNet and the baseline shows the spherical frustum structure does not introduce extra parameters.
As for the comparison with the MLP-based methods [1-2], though the parameter size of SFCNet is larger than that of the MLP-based methods, SFCNet has better segmentation performance. - **Inference Time:** Our SFCNet has the smallest normalized inference time among the compared methods. The results show the spherical frustum structure and hash-based representation are also computationally efficient.

| Method | Points | Memory Usage (M) $\downarrow$ | Normalized Memory Usage (M/K) $\downarrow$ | Parameter Size (M) $\downarrow$ | Inference Time (ms) $\downarrow$ | Normalized Time (ms/K) $\downarrow$ |
| - | - | - | - | - | - | - |
| PointNet++ [1] | ~45K | 841 | 18.69 | 3.8 | 131.0 | 2.91 |
| RandLA [2] | ~120K | 1395 | 11.63 | 4.9 | 212.2 | 1.77 |
| Cylinder3D [3] | ~40K | 1912 | 47.80 | 214 | 67.5 | 1.69 |
| SphereFormer [4] | ~90K | 2678 | 29.76 | 124 | 108.2 | 1.20 |
| RangeViT [5] | ~120K | 1344 | 11.20 | 91 | 104.8 | 0.87 |
| Baseline | ~90K | 1090 | 12.11 | 44 | 46.4 | 0.52 |
| SFCNet (Ours) | ~120K | 1386 | 11.55 | 44 | 59.7 | 0.49 |

--- ### Q2: Detailed Analysis of Performance Gap with 3D Voxel-Based Methods (Reviewer AnTM, MsCy). The 3D voxel-based methods adopt 3D convolutional kernels to learn semantic patterns in 3D space. In contrast, SFCNet learns semantic patterns through 2D convolutional kernels, which have a weaker ability to capture 3D semantic patterns, resulting in weaker performance of SFCNet than the 3D voxel-based methods. However, based on the efficiency analysis in Q1, compared to the 3D voxel-based methods, our SFCNet has better computational efficiency, lower memory consumption, and fewer parameters. Thus, though there exists a performance gap between our SFCNet and the 3D voxel-based methods, our SFCNet realizes a trade-off between efficiency and performance and has the potential to be applied in specific scenarios.
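The normalized columns in the efficiency comparison are simply the raw memory (or time) divided by the point count in thousands. A quick sanity check on a few memory rows from the table (point counts are approximate "~" values, so the time column reproduces only to rounding):

```python
# (approx. thousand points, memory MB, reported normalized memory MB/K)
# values copied from the rebuttal's efficiency table
rows = {
    "PointNet++": (45, 841, 18.69),
    "RangeViT": (120, 1344, 11.20),
    "SFCNet": (120, 1386, 11.55),
}
for name, (kpts, mem_mb, reported) in rows.items():
    # recomputed normalization should match the reported column
    assert abs(mem_mb / kpts - reported) < 0.01, name
```

This also makes clear why per-thousand-point normalization is the fairer metric here: the methods are benchmarked on different input sizes (~40K to ~120K points).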
--- In addition, we provide an attached PDF, where we show the qualitative results in more diverse scenes in Figure 1 for Reviewer AnTM. --- ### Reference
[1] C. R. Qi _et al._, ‘Pointnet++: Deep hierarchical feature learning on point sets in a metric space’, NeurIPS, 2017.
[2] Q. Hu _et al._, ‘Learning semantic segmentation of large-scale point clouds with random sampling’, TPAMI, 2021.
[3] X. Zhu _et al._, ‘Cylindrical and asymmetrical 3d convolution networks for lidar segmentation’, CVPR, 2021.
[4] X. Lai _et al._, ‘Spherical Transformer for LiDAR-based 3D Recognition’, CVPR, 2023.
[5] A. Ando _et al._, ‘RangeViT: Towards Vision Transformers for 3D Semantic Segmentation in Autonomous Driving’, CVPR, 2023.
Pdf: /pdf/0f9876d38800a3b28c161f3c257ee9eb2c2d7928.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Probabilistic Graph Rewiring via Virtual Nodes
Accept (poster)
Summary: This work introduces an approach called implicitly rewired message-passing neural networks (IPR-MPNNs) to address limitations in message-passing graph neural networks (MPNNs): expressiveness and, mainly, over-squashing. Their method works by adding virtual nodes and learning to connect them to existing nodes in a differentiable, end-to-end manner. The authors claim that previous techniques are not scalable and state that their approach is more scalable. They empirically show the performance of their proposed model. Strengths: * Their method slightly improves the computational complexity when using m<<n virtual nodes (otherwise it is quadratic as well). * Empirically, their proposed method improves the accuracy on some of the datasets w.r.t. some of the baselines. * Theoretically, IPR-MPNNs exceed the expressiveness of MPNNs, overcoming the 1-WL test. * The method allows for end-to-end differentiable learning, integrating the rewiring process seamlessly into the training of MPNNs. This is done by only a few previous approaches (see next section). **[POST-REBUTTAL COMMENT]** After carefully reading the answer from the authors, I acknowledge the improvement of the manuscript after the rebuttal (related work, additional discussion, clarification of misunderstandings, additional experiments...); I increase my score and I support the acceptance of the manuscript. Weaknesses: ### Related work * The state of the art is in general not properly structured and not detailed enough. Although they cite some of the important work, they do not delve into the details of the different papers about limitations on oversquashing (Di Giovanni 2023, Black 2023; they cite both of them, but the authors do not seem to cite them when talking about limitations, and they do not even mention the rewiring methods these papers propose, SDRF and GTR), which is one of the main points in this work.
For instance, when talking about graph rewiring they cite several works without detailing the contributions of each, and they also mix works offering different solutions in the same sentence: MixHop is based on adaptive message passing and DIGL is rewiring based on diffusion, but the authors just say that both works are "considered as graph rewiring as they can reach further-away neighbors in a single layer". I mean, the authors mix adaptive MP with spatial rewiring, spectral rewiring, and expanders with no explanation about it. I do not think that the level of structure and detail is enough for a paper in this conference. See Di Giovanni 2023, Banerjee 2022a (FOSR), 2022b (RLEF), and Choi 2024 (PANDA) for examples of better structured and more detailed explanations of the different rewiring methods. * In addition, they miss very relevant works in the literature about graph rewiring and the analysis of graph limitations that are related to this paper. Specifically, the authors evaluate their contribution using commute time or effective resistance. They miss two very important rewiring works in this line: DiffWire (Arnaiz-Rodriguez, 2022) and FOSR (Banerjee, 2022), which introduce the oversquashing connection with the Cheeger constant and methods to reduce the effective resistance between nodes to overcome oversquashing. To be even more clear regarding Banerjee 2022: RLEF and FOSR appear in the references, but in the text there is only one reference, to RLEF. They also do not mention GTR from the already cited paper Black 2023, which is also based on effective resistance, a metric that the authors then use to evaluate IPR-MPNN. I suggest the authors also look more carefully at Di Giovanni 2023 for the impact of not only these but also more works in the graph rewiring literature, and completely cite all the important first works in this area.
* On top of DiffWire and FOSR, the authors also miss other important papers in graph rewiring such as DRew (Gutteridge 2023), LASER (Barbero 2024), and Affinity-Aware GNNs (Velingker). * In addition, they miss relevant papers in the literature about virtual nodes, which is one of the main points in this paper. For instance, the first two important works about virtual nodes (Battaglia 2017 and Brüel-Gabrielsson 2022) are just cited in the appendix and/or without even mentioning that they proposed virtual nodes. Additionally, the authors seem unaware of works focused on the limitations of virtual nodes and how they are related to oversquashing: Southern 2024 and Geisler 2024. These papers completely address how *theoretically* virtual nodes affect oversquashing and how they can be used to mitigate it (see methodology weaknesses). I suggest the authors look more carefully at the literature about virtual nodes and oversquashing, and completely and properly cite all the important first works in this area, correctly stating the main contributions of the prior work. ### Methodology * The contribution of this work is limited since it comes from the combination of virtual nodes and probabilistic message passing. Both ideas were already previously proposed. It would be helpful if the authors were more transparent about the exact novelty of this method. Virtual nodes are not a novelty, as they were proposed before (Battaglia 2017), and neither is probabilistic message passing. The changes to make this approach novel are not sufficient in my opinion. * Regarding the contributions, only the expressiveness contribution is shown theoretically, and it is the one least related to the problem they claim to solve (oversquashing). The rest of the contributions are only shown empirically, which does not advance our knowledge of the limitations of the GNN field. * The authors do not justify theoretically how their approach improves oversquashing, which should be the main focus of the paper.
They just empirically show that the effective resistance and sensitivity are smaller, which is a trivial consequence of Rayleigh's monotonicity principle when densifying a graph. The quality of the paper would increase a lot if the authors really tried to understand why their method improves over-squashing and whether it really does. Southern 2024 and Geisler 2024 might be helpful for this. To reach the level of NeurIPS, the authors should take a more theoretical approach to the problem they are trying to solve. However, the work is mostly empirical, as they also acknowledge in Section 5 (they answer all the questions only empirically). This is very dangerous, since the empirical results might not generalize to other datasets or tasks. Also, it is widely known in the GNN community how little we understand about how GNNs work if we only look at dataset results. * One key contribution is that they do the rewiring in less than quadratic complexity (which is only notable compared with transformers, not with rewiring methods). However, this is shown only empirically, since the authors also mention (l865): "In the worst-case scenario, where m = n, our method exhibits quadratic complexity. However, in all real-world datasets we have tested, the required number of virtual nodes for achieving optimal performance is low". * Also, they do not delve into the limitations of virtual nodes with respect to over-squashing (see Southern 2024 and Geisler 2024). For instance, although the effective resistance is reduced using virtual nodes, over-squashing is not fully solved, since all the messages are squeezed into the virtual nodes, potentially even worsening the problem of exponential aggregation of information in some bottleneck "nodes". As shown in Topping 2022 and Arnaiz-Rodriguez 2022, the bottleneckedness of the nodes (measured by curvature, spectral metrics, or centralities) causes over-squashing.
* As their main experiments are on graph classification tasks, and the main contribution is based on pairwise scores, they should also compare against CT-Layer in DiffWire (Arnaiz-Rodriguez 2022), a rewiring method also based on similarity scores that was shown to improve graph classification tasks. Also, the authors claim that previous "strategies to mitigate over-squashing rely on heuristic rewiring methods or purely randomized approaches that may not adapt well to a given prediction task"; however, DiffWire is in-processing rewiring and takes the task into account. Finally, Geisler 2024 is also task-oriented. * They compare the running time against transformers but not against rewiring methods, which does not make sense. They propose an architecture to do graph rewiring and alleviate over-squashing, so they should compare with graph rewiring methods. ### Refs
P. W. Battaglia, et al. Relational inductive biases, deep learning, and graph networks. 2017.
A. Arnaiz-Rodriguez et al. DiffWire: Inductive Graph Rewiring via the Lovász Bound. LoG, 2022.
P. K. Banerjee et al. Oversquashing in GNNs through the lens of information contraction and graph expansion. Allerton, 2022.
R. Brüel-Gabrielsson, M. Yurochkin, and J. Solomon. Rewiring with positional encodings for graph neural networks. 2022.
F. Di Giovanni et al. On over-squashing in message passing neural networks: The impact of width, depth, and topology. ICML, 2023.
M. Black et al. Understanding Oversquashing in GNNs through the Lens of Effective Resistance. ICML, 2023.
D. Gutteridge et al. DRew: Dynamically rewired message passing with delay. ICML, 2023.
A. Velingker et al. Affinity-Aware Graph Networks. NeurIPS, 2023.
F. Barbero et al. Locality-Aware Graph Rewiring in GNNs. ICML, 2024.
J. Choi et al. PANDA: Expanded Width-Aware Message Passing Beyond Rewiring. ICML, 2024.
J. Southern et al. Understanding Virtual Nodes: Oversmoothing, Oversquashing, and Node Heterogeneity. 2024.
S. Geisler et al. Spatio-Spectral Graph Neural Networks. 2024.
Technical Quality: 3 Clarity: 2 Questions for Authors: Questions are listed along with each specific identified weakness in the previous section. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: I summarize the limitations of the work here; I refer to the weaknesses section for more detailed feedback. * Related work is not complete and not clearly explained and structured. * The authors do not closely connect how virtual nodes solve over-squashing. * The authors do not discuss limitations of the virtual nodes w.r.t. over-squashing. * All the important contributions are empirical, and the theoretical ones are not very novel. * The core idea is not very novel, just a combination of previous work on virtual nodes and probabilistic message passing, and not a big contribution by itself. Also, they mention as contributions that their method alleviates the limitations of GNNs, but there is no explanation why, just empirical results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and detailed review. Due to the space restrictions for the rebuttal, we address related work and novelty comments here, with the remaining concerns, additional results, and the bibliography in other official comments at the same level. - **Q: Related work.** - **R:** We thank the reviewer for the feedback on our related work sections. We agree that giving a comprehensive overview of current literature is essential. Based on the reviewer's comments, we will improve the related work sections for the final paper. In the following, we discuss the related work that the reviewer has proposed in detail: **Missing concurrent related work:** * **[SN24, GR24]: We would like to point out to the reviewer and the program committee that these are concurrent works that became publicly available on arXiv on 22 May and 29 May, respectively. The NeurIPS Main Conference Paper Submission deadline was 22 May, AoE, so it would have been impossible to include the papers in the related work section at submission time.** --- **Related work proposals that have been discussed:** * **[GE23]:** This is a key work we compare with, citing and discussing it on lines 35, 101-102, and 253. We compare with Drew on the QM9 dataset (Table 2), where we outperform it on 12 out of 13 properties, and on Peptides-func and Peptides-struct (Table 3), where we outperform it on both datasets. * **[BGN22]:** When discussing related rewiring approaches, the first work we mention is [BGN22] (lines 97-98). * **[BA17]:** We include related work regarding Hierarchical MPNNs in our Supplementary Materials. We provide an extensive discussion on both hierarchical methods and virtual node-based methods, including Battaglia 2017 (lines 677-678). * **[BE23]:** We cite this method on line 106. * **[KR22]:** We cite this work on line 35 and briefly discuss it on line 105. 
We also compare with their FoSR method on the TUDataset collection in Table A6, where we achieve significantly better results on all datasets. --- **Missing related work:** * **[ARZ22, VR23, BO24]:** We thank the reviewer for pointing out that this related work is missing. We will update our manuscript to include the concurrent works of [SN24, GR24] and comparisons with them on the peptides datasets. We will add the missing related work of [ARZ22, VR23, BO24] and expand the discussions on [BE23, KR22]. --- * **Q: Methodology.** * **R:** We thank the reviewer for their suggestions; in the following, we respond to their concerns: 1. **Novelty:** While probabilistic rewiring [QN24] and the concept of virtual nodes [BA17] have been explored in the past, combining the two concepts is not trivial. As highlighted in our paper, the key difference between the present work and [QN24] is that, unlike [QN24], where the adjacency matrix is modified directly, we keep the original adjacency matrix fixed and "rewire" by learning a $k$-subset distribution from the original graph nodes to the virtual nodes. Moreover, the virtual nodes can have different dimensions, connecting our message-passing scheme to heterogeneous message-passing and hierarchical MPNNs. Unlike existing hierarchical MPNNs that employ virtual/super nodes, e.g., [BA17, PM17, LI17, CI23], our IPR-MPNNs uniquely leverage differentiable $k$-subset sampling and incorporate hierarchical message-passing in an end-to-end framework. We discuss our connection to hierarchical message-passing and virtual node-based methods in the Supplementary Materials, lines 663-689. 2. **Theoretical guarantees:** Our primary goal is to improve the long-range capabilities of MPNNs while keeping complexity, memory usage, and runtimes low. Long-range dependency modeling is affected by over-squashing and under-reaching, but they are not our core theoretical concern.
In our work, the theoretical guarantees of expressiveness, along with the over-squashing and under-reaching analyses, indicate where the performance gains might come from. 3. **Empirical results might not be generalizable:** Our experimental section offers a comprehensive comparison on both real-world molecular datasets (ZINC, OGBG-Molhiv, QM9, Peptides-struct, Peptides-func, PCQM-Contact, TUDataset) and synthetic datasets (Trees-LeafCount, Trees-LeafMatch, CSL, EXP), where we generally achieve state-of-the-art results. Our empirical evidence shows that our method is effective in addressing over-squashing through model sensitivity (Fig. 2), effective resistance (Fig. 3), and the Trees-NeighborsMatch dataset (Fig A4). We also outperform other rewiring methods (Drew, PR-MPNN, AMP) and graph transformers (GPS, Exphormer, GRIT, etc.) on long-range tasks (Table 3) and various molecular datasets (QM9 in Table 2, ZINC and OGB-Molhiv in Table 4, TUDataset in Table A6). Thus, we argue that IPR-MPNNs are an efficient, state-of-the-art class of MPNNs for both common molecular datasets and those requiring long-range interactions, with strong empirical evidence that indicates an alleviation of over-squashing. 4. **The paper is not theoretical enough for NeurIPS:** NeurIPS Reviewer Guidelines [NS24-1] state that "_claims can be supported by **either theoretical analysis or experimental results**_". NeurIPS has historically included many empirical works and remains a generalist conference, suitable for applications and general machine learning topics, as per the NeurIPS 2024 Call for Papers [NS24-2]. We continue addressing the raised methodology issues in the next official comment. --- Rebuttal 2: Title: Rebuttal - P2 (cont. Methodology) Comment: * **Connections with [GR24, SN24]: We want to reiterate that [GR24, SN24] were not publicly available at the time of submission.** Investigating the connections between IPR-MPNNs and these works would be very interesting. 
[GR24] is similar to IPR-MPNNs in its virtual node interpretation but differs fundamentally as it uses spectral layers to connect virtual nodes to the base graph, while we use a probabilistic, differentiable k-subset sampling approach to connect the base graph to the virtual graph in a data-driven manner. Our method is also similar to [SN24] since they also employ a form of heterogeneous message-passing. However, they do not use multiple virtual nodes and do not sparsify the connections between the original graph and the virtual nodes. Therefore, the theory in [SN24] is not directly applicable to our probabilistic approach. We will update the manuscript with a discussion regarding our connections with the two papers. --- * **Quadratic worst-case complexity:** We acknowledge in our Supplementary that the worst-case complexity of our model is quadratic. However, it is quadratic only in the number of virtual nodes, thereby quadratic in the number of original nodes only if the number of virtual nodes is greater or equal to the number of original nodes. **If the original graph is complete, MPNNs also have quadratic complexity in the number of nodes. Any rewiring method that might lead to a complete graph is potentially quadratic.** This is an improbable scenario for both general rewiring methods and IPR-MPNNs. We demonstrate that using up to 8 virtual nodes provides state-of-the-art performance (Table A5) while remaining sub-quadratic in the number of original nodes. We also analyze performance for up to 30 virtual nodes in Table A9, showing runtime and memory improvements over Graph Transformers even in the worst-case scenario. --- * **Virtual nodes are bottlenecks:** The primary motivation for our probabilistic layer that selectively connects the initial graph to the virtual nodes is to alleviate potential information bottlenecks between the base and virtual graphs. 
This acts as a sparsification operation, reducing the information transmitted to individual virtual nodes. This is different from [SN24], since they analyze a single virtual node connected to the entire original graph. Our hierarchical, heterogeneous message-passing scheme allows for bigger hidden dimensions for the virtual graph, which can also help alleviate bottlenecks. Empirically, we show that a low number of virtual nodes is sufficient for passing long-range information in most practical scenarios. For example, in the Trees-NeighborsMatch results (Figure A4), IPR-MPNNs perform as well as a Transformer up to a depth of 6, using only two virtual nodes. We thank the reviewer for the question - this is a very good point, and we will add a more nuanced discussion about possible bottlenecks in the virtual graph in our next paper revision. --- * **No comparison with [ARZ22]:** We compare with publicly available results of various rewiring approaches across multiple datasets: Drew, SPN, and PR-MPNN on QM9 (Table 2); Drew, PR-MPNN, and AMP on LRGB (Table 3); PR-MPNN on ZINC and OGB-Molhiv (Table 4); and FoSR (Table A6). Re-running all possible baselines is very expensive and time-consuming. [ARZ22] contains some results that we can compare with, and we have also performed new experiments with IPR-MPNNs on the heterophilic datasets reported in their paper. As can be seen in the next official comment, we obtain significantly better results than DiffWire on the molecular and heterophilic datasets. --- * **No comparisons with [GR24, SN24]:** The authors of the two parallel works, which became available after the NeurIPS submission deadline, also report results on peptides. As can be seen in the next official comment, our method obtains overall better scores on the Peptides datasets when compared with [SN24], and is competitive with [GR24], obtaining slightly worse results on func and slightly better results on struct.
We will update the tables for the revision, including results from [ARZ22, SN24, GR24] in our paper. --- * **No runtime comparison with other rewiring methods:** We compare with one rewiring method (PR-MPNN) in our extended runtime analysis (Table A9 in the Supplementary). Additionally, we include comparisons with the base GINE in both tables as a lower bound for required resources. In all scenarios, IPR-MPNNs' overhead over the base GINE model is generally low (see Table 1 and Table A9). We have also performed an additional experiment comparing IPR-MPNNs with Drew [GE23]. We fix the same number of message-passing layers with a similar number of parameters, and we run the Drew model from the code provided by the authors on the same hardware. The two methods are comparable, with IPR-MPNNs being slightly faster, with slightly higher memory requirements (please see the next official comment). --- Rebuttal 3: Title: Rebuttal - New comparisons and experiments Comment: --- **Comparison with DiffWire [ARZ22] on molecular datasets:**

| Datasets | GAP (R) [ARZ22] | GAP (N) [ARZ22] | IPR-MPNN |
|----------|-----------------|-----------------|----------|
| MUTAG &uarr; | 86.9 ± 4.0 | 86.9 ± 4.0 | **98.0 ± 3.4** |
| PROTEINS &uarr; | 75.0 ± 3.0 | 75.3 ± 2.1 | **85.4 ± 4.4** |

--- **Comparison with various methods (incl. DiffWire, SDRF) on the heterophilic WebKB datasets:**

| Methods | Cornell &uarr; | Texas &uarr; | Wisconsin &uarr; |
|---------|----------------|--------------|------------------|
| GINE | 0.448±0.073 | 0.650±0.068 | 0.517±0.054 |
| DIGL [GR19] | 0.582±0.005 | 0.620±0.003 | 0.495±0.003 |
| Geom-GCN [PI20] | 0.608±N/A | 0.676±N/A | 0.641±N/A |
| SDRF [TG21] | 0.546±0.004 | 0.644±0.004 | 0.555±0.003 |
| DiffWire [ARZ22] | 0.690±0.044 | N/A | 0.791±0.021 |
| GPS [RK22] | 0.718±0.024 | 0.773±0.013 | 0.798±0.090 |
| Graphormer [YG21] | 0.683±0.017 | 0.767±0.017 | 0.770±0.019 |
| IPR-MPNN | **0.764±0.056** | **0.808±0.052** | **0.804±0.052** |

--- **Comparison with S2GCN [GR24] and GatedGCN+VN$_G$ [SN24] on the long-range peptides datasets:**

| Model | Func &uarr; | Struct &darr; |
|-------|-------------|---------------|
| S2GCN [GR24] | 0.7275 ± 0.0066 | 0.2467 ± 0.0019 |
| S2GCN+PE [GR24] | **0.7311 ± 0.0066** | 0.2447 ± 0.0019 |
| GatedGCN+PE+VN$_G$ [SN24, best] | 0.6822 ± 0.0052 | 0.2458 ± 0.0006 |
| IPR-MPNN | 0.7210 ± 0.0039 | **0.2422 ± 0.0007** |

--- **Runtime comparison with the Drew rewiring method [GE23]:**

| Feature | IPR-MPNN | Drew [GE23] |
|---------|----------|-------------|
| Params | 536k | 522k |
| Mem | 1.9GB | 1.8GB |
| s/train | 2.98s ±0.02 | 3.20s ±0.03 |
| s/val | 0.27s ±0.00 | 0.36s ±0.00 |

--- Rebuttal 4: Title: Rebuttal - References Comment:
[SN24]: Southern, J. et al. "Understanding Virtual Nodes: Oversmoothing, Oversquashing, and Node Heterogeneity", arXiv 2024
[GR24]: Geisler, S. et al. "Spatio-Spectral Graph Neural Networks", arXiv 2024
[GE23]: Gutteridge, B. et al. "DRew: Dynamically Rewired Message Passing with Delay", ICML 2023
[BGN22]: Brüel-Gabrielsson, R. et al. "Rewiring with Positional Encodings for Graph Neural Networks", TMLR 2023
[BA17]: Battaglia, P.W. et al. "Relational inductive biases, deep learning, and graph networks", arXiv 2018
[BE23]: Banerjee, P.K. et al. "Oversquashing in GNNs through the lens of information contraction and graph expansion", Allerton 2022
[KR22]: Karhadkar, K. et al. "FoSR: First-order spectral rewiring for addressing over-squashing in GNNs", ICLR 2023
[ARZ22]: Arnaiz-Rodriguez, A. et al. "DiffWire: Inductive Graph Rewiring via the Lovász Bound", LoG 2022
[VR23]: Velingker, A. et al. "Affinity-Aware Graph Networks", NeurIPS 2023
[BO24]: Barbero, F. et al. "Locality-Aware Graph Rewiring in GNNs", ICML 2024
[QN24]: Qian, C. et al. "Probabilistically Rewired Message-Passing Neural Networks", ICLR 2024
[PM17]: Pham, T. et al. "Graph Classification via Deep Learning with Virtual Nodes", arXiv 2017
[LI17]: Li, J. et al. "Learning graph-level representations for drug discovery", arXiv 2017
[CI23]: Cai, C. et al. "On the connection between MPNN and Graph Transformers", ICML 2023
[GR19]: Gasteiger, J., Weißenberger, S., and Günnemann, S. "Diffusion improves graph learning", NeurIPS 2019
[PI20]: Pei, H., et al. "Geom-GCN: Geometric graph convolutional networks", ICLR 2020
[TG21]: Topping, J., et al. "Understanding over-squashing and bottlenecks on graphs via curvature", ICLR 2022
[RK22]: Rampášek, L., et al. "Recipe for a General, Powerful, Scalable Graph Transformer", NeurIPS 2022
[YG21]: Ying, C., et al. "Do Transformers Really Perform Bad for Graph Representation?", NeurIPS 2021
[NS24-1]: NeurIPS 2024 Reviewer Guidelines
[NS24-2]: NeurIPS 2024 Call For Papers
--- Rebuttal Comment 4.1: Comment: Thanks for your thoughtful rebuttal. It looks like some of my questions/points were valid and some had to do with a misunderstanding of the work as it is currently presented. Thank you for taking the time to explain which was which. I also acknowledge some misunderstanding of some parts of the paper on my part.
I reply to specific parts where I still do not fully agree with the authors: **[Related work]** Regarding related work, I pointed out several (10+, most of them not concurrent and some even *seminal*) works that are quite aligned with and related to the topics the authors propose (rewiring, analysis of why and how rewiring works, and virtual nodes). So I still think my point about missing related work was valid. Regarding the concurrent work, of course I agree with the authors and am aware that these two works are concurrent. However, I think that these works, especially SN24, are of special interest since they provide theoretical insights and limitations on the interplay of virtual nodes and the over-problems. Even if this work is concurrent, the theoretical insights identified in that paper are crucial for this work. They find that a virtual node mitigates over-squashing only for some graphs. They discover that the average commute time is reduced when a virtual node is added, and also that the Jacobian between two nodes is independent of the distance between them if they are separated by more than 2 hops after a virtual node is added (only for some classes of graphs). **[Theoretical explanations of VN-Overproblems]** I maintain my position that the authors do not prove theoretically that their method solves over-squashing (as SN24 does), but rather perform an empirical analysis of it. Don't get me wrong, I don't think a deeper theoretical analysis is necessary for the proposed paper, but the authors claim several times that "the method reduces over-..." based only on experimental results. Considering that a theoretical analysis might be aimed for a follow-up paper, I suggest the authors lower the claims about solving over-smoothing and over-squashing and frame them as a result of the empirical work.
Good empirical work can stand on its own, especially in practically oriented papers like this one, but the statements need to be in tone with the paper and the community in which it is presented. ***After reading the explanations of my concerns, the discussion of related work, the discussion of introducing bottlenecks, and the extensive additional experiments (runtime and baselines) performed by the authors, I raise my score from 3 to 5 and advocate acceptance of the paper.*** --- Reply to Comment 4.1.1: Comment: We thank the reviewer for their positive re-assessment of our work! As we mentioned in the rebuttal, we will include the missing related work and the concurrent works of [SN24, GR24] in the final version of the paper. We agree that there are some strong connections between our work and [SN24, GR24], and we will add a discussion highlighting the similarities and differences between the approaches. We will also lower our claims by clarifying that we only have proof of higher expressiveness in the WL sense but no direct theoretical evidence that we alleviate OSM/OSQ. We will only highlight that the empirical evidence and performance indicate that this _might_ be the case. For further work, it would also be very interesting to investigate whether the theoretical results from [SN24] could be extended to our probabilistic method and whether our sparsification-via-sampling strategy adds any theoretical benefits, as is the case for WL expressiveness. Once again, we thank the reviewer for their feedback and for engaging with us during the rebuttal discussion period! --- Rebuttal 5: Title: Please Engage in Discussion Comment: Dear Reviewer 5s4E, Thank you for your time and efforts throughout the review period. Please read the authors' rebuttal as soon as possible and indicate if their responses have addressed all your concerns. Best, Area Chair
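As a numerical aside on the effective-resistance argument raised in this thread (Rayleigh monotonicity under densification), the effect of adding a fully-connected virtual node can be checked directly. This is an illustrative sketch on a toy path graph, not code from the paper or the rebuttal:

```python
import numpy as np

def effective_resistance(adj, u, v):
    """R_uv = L+_uu + L+_vv - 2 L+_uv, with L+ the Laplacian pseudoinverse."""
    lap = np.diag(adj.sum(axis=1)) - adj
    lp = np.linalg.pinv(lap)
    return lp[u, u] + lp[v, v] - 2.0 * lp[u, v]

# Path graph 0-1-2-3-4-5: unit resistors in series, so R(0, 5) = 5.
n = 6
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
r_base = effective_resistance(adj, 0, n - 1)

# Densify with one virtual node connected to every original node.
adj_vn = np.zeros((n + 1, n + 1))
adj_vn[:n, :n] = adj
adj_vn[:n, n] = adj_vn[n, :n] = 1.0
r_vn = effective_resistance(adj_vn, 0, n - 1)

# Rayleigh monotonicity: adding edges can only decrease effective resistance.
print(r_base, r_vn)
```

Consistent with the reviewer's point, the reduction is a direct consequence of monotonicity, so observing it alone does not establish that over-squashing is solved.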
Summary: The paper proposes implicitly rewired message-passing neural networks (IPR-MPNNs), which integrate implicit probabilistic graph rewiring into MPNNs. This method involves introducing a small number of virtual nodes into the graph, allowing for long-distance message propagation without the quadratic complexity associated with graph transformers. Strengths: 1. The introduction of implicit probabilistic graph rewiring via virtual nodes is a novel contribution that addresses the limitations of traditional MPNNs, such as under-reaching and over-squashing, without incurring the high computational cost of graph transformers. 2. The authors provide a theoretical analysis demonstrating that IPR-MPNNs exceed the expressive capacity of standard MPNNs, typically limited by the 1-dimensional Weisfeiler-Leman algorithm. Weaknesses: 1. It remains unclear to me how the unnormalized node priors $\theta$ are derived. On Line 170, the authors mentioned that $\theta \in \Theta$, but what is $\Theta$? According to Line 169, is it the output of the upstream GNN? Why can we trust that it accurately reflects the original node-virtual node connectivity? Is there any related optimization objective? 2. Gradient estimators for graph sampling are not a new topic. The authors may refer to [1, 2]. Providing related ablations might be more convincing. 3. I do not fully understand Figure 2. As stated by the authors, the values in the figure are about "the two most distant nodes," but why is this referred to as model sensitivity? How is this sensitivity defined? 4. Although the authors provided a complexity analysis, I believe it is necessary to provide numerical comparisons, such as the per epoch time and wall-clock time before and after using IPR-MPNN. [1] Robust Graph Representation Learning via Neural Sparsification, NeurIPS22 [2] Differentiable Graph Module (DGM) for Graph Convolutional Networks, TPAMI Technical Quality: 3 Clarity: 2 Questions for Authors: 1. 
Can you provide an algorithm table that details the training process of IPR-MPNN? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback on our work. In the following, we will try to address the concerns raised by the reviewer: - **W1:** How are the unnormalized priors $\theta$ obtained? Is there any related optimization objective? - **RW1:** The unnormalized priors are the output of the upstream MPNN. Specifically, after some message-passing steps, we use a shared linear readout layer for each node to obtain $\theta$. If we have $N$ virtual nodes, we obtain a $1\times N$ unnormalized prior $\theta$ for each of the original nodes. These priors are then sent to the sampling layer, ensuring exactly $k$ virtual nodes are selected for each original node. There is no separate optimization objective for the priors; we estimate the gradients for the sampling layer using an exactly-$k$ gradient estimator (SIMPLE). --- - **W2:** Gradient estimation for graph sampling is not a new topic. - **RW2:** We agree that gradient estimation for graph sampling is not novel by itself. We discuss similar approaches in the “Graph Structure Learning” section of Related Work in the Supplementary Materials (lines 636-648). However, our work's novelty lies in leveraging differentiable $k$-subset sampling for probabilistic rewiring and incorporating hierarchical message passing in an end-to-end trainable framework. We show how this probabilistic framework increases model expressiveness and benefits modeling long-range interactions. Additionally, our work differs from previous approaches by using state-of-the-art $k$-subset sampling algorithms (SIMPLE), while most previous works use $1$-subset sampling and estimate gradients with the Gumbel-Softmax trick. Regarding possible ablations - we show how the model sensitivity changes when different numbers of virtual nodes are used in Figure 2. We also provide a new ablation for the number of virtual nodes and samples.
Our approach treats the virtual nodes and original nodes as heterogeneous nodes; therefore, for a fair comparison, we use the same heterogeneous MPNN in the MPNN+VN experiments, without reducing the number of parameters.

| Model | ZINC &darr; | ogb-molhiv &uarr; | peptides-func &uarr; | peptides-struct &darr; |
|-------|-------------|-------------------|----------------------|------------------------|
| 1VN - FC | 0.074 ± 0.002 | 0.753 ± 0.011 | 0.7039 ± 0.0046 | 0.2435 ± 0.0007 |
| IPR-MPNN 2VN1S | 0.072 ± 0.004 | 0.762 ± 0.014 | 0.7146 ± 0.0055 | 0.2472 ± 0.0014 |
| IPR-MPNN 2VN2S | **0.067 ± 0.001** | **0.788 ± 0.006** | **0.7210 ± 0.0039** | **0.2422 ± 0.0007** |

| Model | peptides-func &uarr; |
|-------|----------------------|
| 1VN - FC | 0.7039 ± 0.0046 |
| 2VN1S | 0.7146 ± 0.0055 |
| 2VN2S | **0.7210 ± 0.0039** |
| 2VN4S | 0.7145 ± 0.0020 |

In most practical scenarios we have tested, having more than 2 samples does not positively affect the performance but can lower the standard deviation between runs. We see that using 4 samples decreases overall performance, but the standard deviation over the 5 runs is much lower. --- - **W3:** Sensitivity analysis is unclear. - **RW3:** We detail this on lines 756-758 of our Supplementary Materials. We compute the logarithm of the symmetric sensitivity between the most distant nodes $u$, $v$, i.e., $\ln\left(\left| \frac{\partial \mathbf{h}^{l}_v}{\partial \mathbf{h}^{k}_u} \right| + \left| \frac{\partial \mathbf{h}^{l}_u}{\partial \mathbf{h}^{k}_v} \right|\right)$, where $k$ to $l$ represent the intermediate layers. We thank the reviewer for pointing out that this is unclear; we will move these details to the main text for the final version of our paper. --- - **W4:** Runtime comparisons for IPR-MPNN. - **RW4:** The paper includes two comparisons. The first, in Table 1, is between the base GINE model, IPR-MPNNs, and the GPS Graph Transformer.
We show that our method has similar inference times and memory consumption to the base model, while being significantly more efficient than GPS. The second comparison is in Table A9 in the Supplementary Materials, where we compare various configurations of IPR-MPNN with the base GINE model, variants of the SAT Graph Transformer, GPS, and variants of the PR-MPNN rewiring method. In most cases, we perform better than the other models while maintaining similar performance and memory consumption to the base GINE model. --- - **Q1:** Algorithm table for training IPR-MPNN. - **RQ1:** We thank the reviewer for their suggestion; we will include an algorithm table/pseudocode for IPR-MPNN in the final version of our paper. --- We want to thank the reviewer for their suggestions and kindly ask the reviewer to increase their score if they are satisfied with our response. We are happy to answer any remaining questions! --- Rebuttal Comment 1.1: Comment: Thank you for your response. After reading the response and other reviewers' comments, I choose to maintain my score. --- Reply to Comment 1.1.1: Comment: We once again thank the reviewer for their suggestions and questions!
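As an aside, the symmetric-sensitivity metric described in RW3, and the under-reaching effect that virtual nodes address, can be illustrated with a toy model. The small MPNN below and its finite-difference Jacobian estimate are our own illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def forward(h0, a_hat, ws):
    """A toy MPNN: each layer mixes neighbors via a_hat, then applies tanh."""
    h = h0
    for w in ws:
        h = np.tanh(a_hat @ h @ w)
    return h

def sensitivity(h0, a_hat, ws, u, v, eps=1e-5):
    """Finite-difference estimate of |d h_v / d h_u| (Frobenius norm)."""
    d = h0.shape[1]
    base = forward(h0, a_hat, ws)
    jac = np.zeros((d, d))
    for i in range(d):
        hp = h0.copy()
        hp[u, i] += eps
        jac[:, i] = (forward(hp, a_hat, ws)[v] - base[v]) / eps
    return np.linalg.norm(jac)

rng = np.random.default_rng(0)
n, d, n_layers = 6, 4, 3
ws = [rng.normal(scale=0.5, size=(d, d)) for _ in range(n_layers)]
h0 = rng.normal(size=(n, d))

# Path graph 0-...-5 with self-loops, row-normalized.
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
a_hat = adj + np.eye(n)
a_hat /= a_hat.sum(axis=1, keepdims=True)

# With 3 layers, nodes 0 and 5 (5 hops apart) cannot interact: under-reaching.
s_base = sensitivity(h0, a_hat, ws, 0, 5)

# Add a virtual node connected to all original nodes: 2 hops now suffice.
adj_vn = np.zeros((n + 1, n + 1))
adj_vn[:n, :n] = adj
adj_vn[:n, n] = adj_vn[n, :n] = 1.0
a_hat_vn = adj_vn + np.eye(n + 1)
a_hat_vn /= a_hat_vn.sum(axis=1, keepdims=True)
h0_vn = np.vstack([h0, np.zeros((1, d))])
s_vn = sensitivity(h0_vn, a_hat_vn, ws, 0, 5)

print(s_base, s_vn)  # s_base is exactly 0; s_vn is nonzero
```

The paper's metric is the log of the symmetrized quantity; the log of a zero sensitivity diverges, which is why under-reaching shows up so starkly in such plots.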
Summary: The paper introduces a method (IPR-MPNN) which connects nodes to a small number of virtual nodes in a learnable way. The proposed approach is more expressive than MPNNs whilst circumventing quadratic complexity. It can reduce over-squashing and performs well on various benchmarks. Strengths: - The benefits of the approach in the context of MPNNs (expressivity) and Graph Transformers (complexity) are well argued, and I particularly like the probabilistic argument ensuring that isomorphic graphs are not separated with high probability. - The results are extremely good, and the approach, while quite simple, achieves SOTA on various benchmarks. - To the best of my knowledge the approach is novel due to the learnable rewiring, whilst there are interesting connections to other methods (hierarchical MPNNs, node marking, etc.) which will interest the community. Weaknesses: - Whilst the method is shown to work well in the context of MPNNs and GTs, the probabilistic rewiring is less well motivated in the context of standard virtual nodes. Adding a baseline of MPNN + VN (with your number of VNs) would help, and your results seem like they would be much better. Additionally, explaining why only connecting a fraction of nodes to the VN (instead of all) helps in terms of expressivity could also be mentioned/explored - this seems like it would be the case intuitively. You could also compare the effective resistance of your IPR-MPNN graph to one where we just have a single global VN. As MPNN + VN is quite popular in the community, this would help us understand why we should favour your approach for certain tasks. - In line 68, you suggest that your approach helps with "scaling to large graphs effectively". To me, it is not clear if this is the case. Firstly, the results seem to be mainly on small molecular datasets. Additionally, whilst you only use a small number of VNs on these datasets (e.g., 2), this can still be 10% of the number of nodes in the graph.
It is not clear if for large graphs you also need to have 10% VNs compared to the total number of nodes. If this is the case, would your approach still be practical? Technical Quality: 4 Clarity: 4 Questions for Authors: - Do you have some explanation for why your approach improves over GTs? Can GTs not replicate this method? - It would be useful to the community to mention the fraction of nodes which are connected to the VNs on some of these datasets. Connecting to a single node seems similar to node-marking, and connecting to all nodes would be equivalent to a standard VN. It is interesting to see how this approach falls in this region for certain tasks. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: some limitations have been addressed in the appendix. For example: "We have designed our rewiring method specifically to solve long-range graph-level tasks" - I assume the model is flexible here though in that it could also only use local interactions by connecting to a single node or node neighbours. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive assessment of our work. In the following, we will respond to the raised issues: - **W1:** Empirical and theoretical comparison with MPNN + VN. - **RW1:** We thank the reviewer for the suggestion. The paper currently contains one such comparison in Table 2 - R-GIN-FA is a Relational GIN model with a fully-connected virtual node. Nevertheless, we also conducted the following experiments with the base model and a fully-connected virtual node, which we will include in the final version of our paper: Our approach treats the virtual nodes and original nodes as heterogeneous nodes; therefore, for a fair comparison, we use the same heterogeneous MPNN in the MPNN+VN experiments, without reducing the number of parameters. | Model | ZINC &darr; | ogb-molhiv &uarr; | peptides-func &uarr; | peptides-struct &darr; | |---------------------|----------------------|----------------------|---------------------|---------------------| | 1VN - FC | 0.074 ± 0.002 | 0.753 ± 0.011 | 0.7039 ± 0.0046 | 0.2435 ± 0.0007 | | IPR-MPNN 2VN1S | 0.072 ± 0.004 | 0.762 ± 0.014 | 0.7146 ± 0.0055 | 0.2472 ± 0.0014 | | IPR-MPNN 2VN2S | **0.067 ± 0.001** | **0.788 ± 0.006** | **0.7210 ± 0.0039** | **0.2422 ± 0.0007** | The advantages of having sparse, learnable connections over fully-connected virtual nodes are as follows: * **Bottlenecks** - The network is less likely to suffer from information bottlenecks in the virtual graph since not all original nodes are connected to the virtual nodes. * **Expressiveness** - Adding virtual nodes connected to the entire base graph would not make the MPNN more powerful than 1-WL. Consider the following proof sketch: Assume that we have a graph $G$ with some stable $1$-WL coloring. Now, for each node $v$ we recursively unroll the neighborhoods into a rooted tree of sufficient depth. The isomorphism type of this tree corresponds $1$-to-$1$ to the stable color of node $v$. 
Now, if we consider all vertices, we have a forest of unrolling trees. We create a new vertex (the virtual node) and connect the roots of the trees in the forest to this new vertex. Then, the isomorphism type (of this forest) corresponds $1$-to-$1$ to the isomorphism type of the original graph modulo $1$-WL; therefore, if some other graph with a virtual node has the same $1$-WL coloring, the two graphs remain indistinguishable. On the other hand, we can attach the virtual node to the original graph and unroll from the virtual node. However, this tree will be isomorphic to the tree constructed above (forest + root vertex). Therefore, the graphs remain indistinguishable. We will add a more nuanced discussion regarding the comparison with other virtual node approaches in the final version of our paper. --- - **W2:** Would the approach be practical for large graphs? - **RW2:** We thank the reviewer for pointing out the inconsistency. We have performed new experiments on heterophilic datasets, which have a significantly greater number of nodes per graph. We report here the results: | Methods | Cornell &uarr; | Texas &uarr; | Wisconsin &uarr; | |---|---|---|---| | GINE | 0.448±0.073 | 0.650±0.068 | 0.517±0.054 | | DIGL [GR19] | 0.582±0.005 | 0.620±0.003 | 0.495±0.003 | | Geom-GCN [PI20] | 0.608±N/A | 0.676±N/A | 0.641±N/A | | SDRF [TG21] | 0.546±0.004 | 0.644±0.004 | 0.555±0.003 | | DiffWire [ARZ22] | 0.690±0.044 | N/A | 0.791±0.021 | | GPS [RK22] | 0.718±0.024 | 0.773±0.013 | 0.798±0.090 | | Graphormer [YG21] | 0.683±0.017 | 0.767±0.017 | 0.770±0.019 | | IPR-MPNN | __0.764±0.056__ | __0.808±0.052__ | __0.804±0.052__ | Moreover, we also report results on the newly-introduced heterophilic datasets from [PV23]. These datasets contain more challenging scenarios and have a significantly higher number of nodes per graph (22K for roman-empire, 11k for tolokers, 24k for amazon-ratings, 10k for minesweeper). 
We can run all IPR-MPNN experiments with a memory consumption of at most 10GB. | Model | Roman-empire &uarr; | Tolokers &uarr; | Minesweeper &uarr; | Amazon-ratings &uarr; | |------------------|-----------------------------|-----------------------------|-----------------------------|-----------------------------| | GINE (base) | 0.476±0.006 | 0.807±0.006 | 0.799±0.002 | **0.488±0.006** | | IPR-MPNN (ours) | **0.839±0.006** | **0.820±0.008** | **0.887±0.006** | 0.480±0.007 | --- - **Q1:** Why do we see improvements over GTs? - **RQ1:** This is a great question. We posit that the improvements come from having a stronger graph inductive bias than most GTs, where the graph is assumed to be complete. By performing all of our computations using message-passing, we keep a strong inductive bias, while the virtual nodes with learnable connections act similarly to a hard, sparse attention mechanism, thereby allowing for long-range messages to be passed while still maintaining a strong locality bias. Graph Transformers might be able to replicate this method, assuming a sparsification of the attention matrix, with a bias towards the GT “latent graph” being more similar to the original graph, thereby maintaining a better graph bias. --- - **Q2:** How many virtual nodes are used? - **RQ2:** In Table A5 from the Supplementary Materials, we report the number of virtual nodes used for most experiments. A low number (2-8 virtual nodes) is effective for most tasks. --- We want to thank the reviewer for their suggestions and kindly ask the reviewer to increase their score if they are satisfied with our response. We are happy to answer any remaining questions! --- Rebuttal Comment 1.1: Comment: Thank you very much for the detailed response to my questions. The discussion of the advantages of having sparse, learnable connections over fully-connected virtual nodes will indeed make the paper stronger and the results look good for large graphs. 
Adding the number of virtual nodes (as a fraction of total number of nodes) used will be useful here in understanding the scaling. Because of this I have raised my score and will push for acceptance. --- Reply to Comment 1.1.1: Comment: We once again thank the reviewer for their positive assessment of our work and for their comments, which helped us improve our paper! --- Rebuttal 2: Title: Rebuttal - References Comment: [GR19]: Gasteiger, J., Weißenberger, S., and Günnemann, S. "Diffusion improves graph learning." NeurIPS 2019. [PI20]: Pei, H., et al. "Geom-gcn: Geometric graph convolutional networks." ICLR 2020. [TG21]: Topping, J., et al. "Understanding over-squashing and bottlenecks on graphs via curvature." ICLR 2022. [ARZ22]: Arnaiz-Rodríguez, A., et al. "Diffwire: Inductive graph rewiring via the Lovász bound." LoG 2022. [RK22]: Rampášek, L., et al. "Recipe for a General, Powerful, Scalable Graph Transformer." NeurIPS 2022. [YG21]: Ying, C., et al. "Do Transformers Really Perform Bad for Graph Representation?" NeurIPS 2021. [PV23]: Platonov, O., et al. "A critical look at the evaluation of GNNs under heterophily: Are we really making progress?", ICLR 2023
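The 1-WL claim in RW1 above (attaching a fully connected virtual node does not lift an MPNN beyond 1-WL) can be checked mechanically on a small example. Below is a minimal colour-refinement sketch (our own illustration, not code from the paper): a 6-cycle and two disjoint triangles are 1-WL-equivalent, and they remain indistinguishable after a global virtual node is attached to each.

```python
from collections import Counter

def cycle(n, offset=0):
    # undirected n-cycle on vertices offset .. offset+n-1 (adjacency lists)
    return {i + offset: [(i + 1) % n + offset, (i - 1) % n + offset]
            for i in range(n)}

def add_virtual_node(adj):
    # attach one new vertex connected to every original vertex
    adj = {v: list(nb) for v, nb in adj.items()}
    vn = max(adj) + 1
    adj[vn] = list(adj)  # RHS evaluated before assignment, so vn excluded
    for v in adj[vn]:
        adj[v].append(vn)
    return adj

def wl_histograms(adj_a, adj_b, rounds=10):
    # refine colours on the disjoint union so colour ids are comparable
    off = max(adj_a) + 1
    union = {v: list(nb) for v, nb in adj_a.items()}
    union.update({v + off: [u + off for u in nb] for v, nb in adj_b.items()})
    colors = {v: 0 for v in union}
    for _ in range(rounds):
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in union[v])))
               for v in union}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        colors = {v: relabel[sig[v]] for v in union}
    return (Counter(colors[v] for v in adj_a),
            Counter(colors[v + off] for v in adj_b))

c6 = cycle(6)
two_triangles = {**cycle(3), **cycle(3, offset=3)}
ha, hb = wl_histograms(c6, two_triangles)
assert ha == hb  # 1-WL cannot separate C6 from two triangles
ha, hb = wl_histograms(add_virtual_node(c6), add_virtual_node(two_triangles))
assert ha == hb  # still indistinguishable with a fully connected virtual node
```

The second assertion matches the rebuttal's argument: with a universal virtual node, every original vertex simply receives the same extra colour, so the stable partitions of the two graphs stay identical.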
Summary: This paper introduces Implicitly Rewired Message-Passing Neural Networks (IPR-MPNNs) to address the limitations of traditional MPNNs, such as under-reaching and over-squashing. By integrating implicit probabilistic graph rewiring and adding a small number of virtual nodes, IPR-MPNNs enable long-distance message propagation without quadratic complexity. This approach enhances expressiveness and computational efficiency. Strengths: 1. The paper is well-written. 2. The experimental performance is good, demonstrating promising improvements on molecule datasets. 3. The method effectively overcomes the quadratic complexity of graph transformers while capturing long-range information. Weaknesses: 1. **Risk of Over-Smoothing**: - While IPR-MPNNs alleviate squashing and under-reaching, the approach of connecting all nodes via virtual nodes increases the risk of over-smoothing. A discussion on over-smoothing analysis would strengthen the paper. 2. **Applicability to Node-Level Tasks**: - The method should be directly applicable to node-level tasks, where long-range interactions are particularly beneficial. Conducting experiments on node-level datasets requiring long-range interactions and comparing with existing works would enhance the paper. 3. **Unclear Method Description**: - The benefit of sampling \( H \) \( q \) times, its implications, and the associated costs and disadvantages need further clarification. - It is unclear how to ensure that the union of the inverse assignments covers the entire set of original nodes. - The initial set of \( G_c \) is not clearly defined. 4. **Runtime Measurement**: - Better reporting of runtime measurements with respect to the number of \( q \) would provide valuable insights. 5. **Notation Confusion**: - There is some notation confusion, such as between \( h^{(t)} \) and \( H^{(q)} \), which needs to be clarified. 6. 
**Optimal Parameters and Ablation Studies**: - Reporting the optimal \( m \) and \( q \) for each experiment and conducting ablation studies on these parameters would provide insights on how to choose them effectively. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Does the method exacerbate over-smoothing? 2. Why is it necessary to sample $q$ times? Does this serve a similar function as multi-head attention? 3. How do you ensure that the union of the inverse assignments covers the entire set of original nodes? 4. What is the initial set of $G_c$ ? 5. What is the runtime when $q$ is large? 6. How does the method perform on node-level tasks that require long-range interactions? 7. Can more ablation studies be conducted to provide additional insights? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their assessment and feedback. In the following, we respond to the reviewer’s questions. Please note that the tables containing new experiments are in the next official comment and the pdf file attached to the “Global Response”. - **Q1, Q2:** Do IPR-MPNNs increase the risk of over-smoothing? Experiments on node-level tasks would strengthen the paper. - **R1, R2:** The reviewer raises a valid concern. Over-smoothing happens when nodes close to each other collapse to similar representations after several message-passing steps. We provide intuitive arguments and empirical results showing our method does not increase the risk of over-smoothing: * Over-smoothing is more likely if the virtual complete graph connects to the entire original graph since all nodes would get the same embedding from the virtual nodes during the final update. We sparsely connect original nodes to virtual nodes, which can be seen as a clustering step. This allows distant nodes of the same class to directly pass messages via the same virtual node, potentially reducing over-smoothing when compared to using virtual nodes connected to the entire graph. Therefore, it should be easier to find IPR-MPNN configurations that better handle heterophilic, long-range tasks. * To empirically address the reviewers' concern and to verify whether IPR-MPNNs can deal with node classification, we perform experiments on the WebKB datasets and some newly-proposed heterophilic datasets [PV23], which contain node classification tasks in heterophilic scenarios. We generally outperform the methods we compare with. We leave the results in the next official comment. Thank you for your question. We will update the manuscript to include a discussion on over-smoothing and the new experiments, showing improvements over baselines on heterophilic, node-level tasks. --- - **Q3:** The benefits and drawbacks of sampling are unclear. 
- **R3:** We thank the reviewer for raising the concern about clarity. In the following, we will try to highlight the benefits and drawbacks of sampling: * **Benefits**: the main advantage of sampling is obtaining multiple virtual node assignments for the base nodes. They can then be used to obtain different final embeddings for the same graph. From an optimization perspective, sampling multiple configurations makes it easier for the upstream model to explore the combinatorial solution space. From a representation perspective, it effectively augments the data with multiple virtual node configurations. As the reviewer pointed out, this could function similarly to multi-head attention with shared parameters. * **Costs**: we add all of the sampled configurations to the same batch, feeding the batch to the downstream model. The sample embeddings are computed in parallel, and the downstream parameters are shared. The overhead for this is typically low. The computation times for multiple samples and multiple virtual nodes are available in Table A9 in our Supplementary material. We leave the tables in the next official comment for convenience. In scenarios with 2-4 virtual nodes and 1-2 samples, our method greatly improves computation time over GraphGPS, with memory usage similar to GINE. Even with 30 virtual nodes and 20 samples, we remain faster and more memory-efficient than GraphGPS. --- - **Q3.2:** It is unclear how to ensure that the union of the inverse assignments covers the entire set of original nodes. - **R3.2:** We thank the reviewer for pointing out that this is unclear. When sampling virtual nodes, we sample exactly $k$ virtual nodes for each of the original nodes; therefore, each original node will have connections to exactly $k$ nodes in the virtual graph. We will clarify the text on lines 183-191 in the camera ready: “(...) 
where each original node $v\in V(G) := [n]$ is assigned to $k \in [m]$ virtual nodes, _therefore, all of the nodes in original graph will have exactly $k$ edges connecting to the virtual graph._” --- - **Q3.3:** The C(G) set of virtual nodes is not well defined. - **R3.3:** Thank you for pointing to this unclarity. The virtual node set C(G) is a set of new nodes that form a complete virtual graph - we will expand the text on lines 183-191 for the next revision: “and a _fixed, new_ virtual node set $C(G):=[m]$ of cardinality $m$” --- - **Q4:** Runtimes with respect to the number of samples. - **R4:** We report runtimes based on the number of samples and virtual nodes in Table A9 of our Supplementary materials. We will move some results from the Appendix to the main paper for the final revision. --- - **Q5:** $H^{(q)}$ and $h^{(t)}$ are a confusing notation. - **R5:** Thank you for highlighting the unclear notation. $H^{(q)}$ represents the matrix of new connections between original and virtual nodes, while $h^{(t)}$ represents the hidden embeddings of the original nodes after $t$ message-passing layers. We will clarify this in the final version. --- - **Q6:** Optimal hyperparameters and ablation studies. - **R6:** The hyperparameters used in our experiments are in Table A5 of the Supplementary materials. We also conducted new experiments with a virtual node connected to the entire virtual graph and with two virtual nodes, sampling one, two, or four configurations. Please see the next official comments for the results. In most practical scenarios that we have tested on, having more than 2 samples doesn’t positively affect the performance but can lower the standard deviation between runs. We see that using 4 samples decreases overall performance, but the standard deviation for the 5 runs is much lower. --- Once again, we thank the reviewer for their suggestions, and we kindly ask the reviewer to increase their score if they are satisfied with our response. 
We are happy to answer any remaining questions! --- Rebuttal Comment 1.1: Comment: I appreciate the author’s detailed response to my questions and comments, as well as the additional experimental results. However, I still have concerns about oversmoothing. Incorporating a measurement based on Dirichlet Energy might provide further insights. After considering the feedback from other reviewers, I have decided to maintain my current score. --- Rebuttal 2: Title: Rebuttal - New experimental results Comment: **Q1, Q2: New experimental results on the WebKB heterophilic datasets and on the heterophilic datasets proposed in [PV23]:** | Methods | Cornell &uarr; | Texas &uarr; | Wisconsin &uarr; | |-------------------------------|-------------------------|-------------| --- | | GINE | 0.448±0.073 | 0.650±0.068 | 0.517±0.054 | | DIGL [GR19] | 0.582±0.005 | 0.620±0.003 | 0.495±0.003| | Geom-GCN [PI20] | 0.608±N/A | 0.676±N/A | 0.641±N/A | | SDRF [TG21] | 0.546±0.004 | 0.644±0.004 | 0.555±0.003 | | DiffWire [ARZ22] | 0.690±0.044 | N/A | 0.791±0.021 | | GPS [RK22] | 0.718±0.024 | 0.773±0.013 | 0.798±0.090 | | Graphormer [YG21] | 0.683±0.017 | 0.767±0.017 | 0.770±0.019| | IPR-MPNN | __0.764±0.056__ | __0.808±0.052__ | __0.804±0.052__ | | Model | Roman-empire &uarr; | Tolokers &uarr; | Minesweeper &uarr; | Amazon-ratings &uarr; | |------------------|-----------------------------|-----------------------------|-----------------------------|-----------------------------| | GINE (base) | 0.476±0.006 | 0.807±0.006 | 0.799±0.002 | **0.488±0.006** | | IPR-MPNN (ours) | **0.839±0.006** | **0.820±0.008** | **0.887±0.006** | 0.480±0.007 | --- **Q4: Reduced runtime report, also available in Table A9 of the Appendix:** | Model | #Params | V. Nodes | Samples | Train s/ep | Val s/ep | Mem. 
usage | |------------|---------|----------|---------|---------------------|---------------------|------------| | GINE | 502k | - | - | 3.19 ± 0.03 | 0.20 ± 0.01 | 0.5GB | | GraphGPS | 558k | - | - | 17.02 ± 0.70 | 0.65 ± 0.06 | 6.6GB | | IPR-MPNN | 548k | 2 | 1 | 7.31 ± 0.08 | 0.35 ± 0.01 | 0.8GB | | IPR-MPNN | 548k | 4 | 2 | 7.37 ± 0.08 | 0.35 ± 0.01 | 0.8GB | | IPR-MPNN | 549k | 30 | 20 | 9.41 ± 0.38 | 0.43 ± 0.01 | 1.2GB | --- **Q6: Table comparing a virtual node connected to the entire graph (1VN - FC) with IPR-MPNN with two virtual nodes, one sample (2VN1S), two samples (2VN2S) and four samples (2VN4S)** | Model | ZINC &darr; | ogb-molhiv &uarr; | peptides-func &uarr; | peptides-struct &darr;| |---------------------|----------------------|----------------------|---------------------|---------------------| | 1VN - FC | 0.074 ± 0.002 | 0.753 ± 0.011 | 0.7039 ± 0.0046 | 0.2435 ± 0.0007 | | IPR-MPNN 2VN1S | 0.072 ± 0.004 | 0.762 ± 0.014 | 0.7146 ± 0.0055 | 0.2472 ± 0.0014 | | IPR-MPNN 2VN2S | **0.067 ± 0.001** | **0.788 ± 0.006** | **0.7210 ± 0.0039** | **0.2422 ± 0.0007** | | Model | peptides-func &uarr; | |----------|---------------------| | 1VN - FC | 0.7039 ± 0.0046 | | 2VN1S | 0.7146 ± 0.0055 | | 2VN2S | **0.7210 ± 0.0039** | | 2VN4S | 0.7145 ± 0.0020 | --- Rebuttal 3: Title: Rebuttal - References Comment: [GR19]: Gasteiger, J., Weißenberger, S., and Günnemann, S. "Diffusion improves graph learning." NeurIPS 2019. [PI20]: Pei, H., et al. "Geom-gcn: Geometric graph convolutional networks." ICLR 2020. [TG21]: Topping, J., et al. "Understanding over-squashing and bottlenecks on graphs via curvature." ICLR 2022. [ARZ22]: Arnaiz-Rodríguez, A., et al. "Diffwire: Inductive graph rewiring via the Lovász bound." LoG 2022. [RK22]: Rampášek, L., et al. "Recipe for a General, Powerful, Scalable Graph Transformer." NeurIPS 2022. [YG21]: Ying, C., et al. "Do Transformers Really Perform Bad for Graph Representation?" NeurIPS 2021. [PV23]: Platonov, O., et al. 
"A critical look at the evaluation of GNNs under heterophily: Are we really making progress?", ICLR 2023 --- Rebuttal 4: Title: Please Engage in Discussion Comment: Dear Reviewer UsNW, Thank you for your time and efforts throughout the review period. Please read the authors' rebuttal as soon as possible and indicate if their responses have addressed all your concerns. Best, Area Chair --- Rebuttal 5: Comment: We thank the reviewer for their suggestion. While our method is not tailored to alleviate over-smoothing, we agree that having evidence that our model does not _increase_ the phenomenon of over-smoothing is beneficial. We have measured the Dirichlet Energy of the final layer on the heterophilic WebKB Cornell, Wisconsin, Texas datasets and on the roman-empire dataset from [PV23]. We compare IPR-MPNN with the base GINE model. The results are averaged for 5 runs, and we also report the standard deviations: | Model | Cornell ↑ | Wisconsin ↑ | Texas ↑ | Roman-Empire ↑ | |-------------|-----------------------|-----------------------|-----------------------|-----------------------| | GINE (base) | 8.34 ± 4.74 | 5.09 ± 3.11 | 7.41 ± 4.57 | 19.36 ± 1.16 | | IPR-MPNN | **12.10 ± 4.77** | **8.89 ± 2.67** | **9.65 ± 1.90** | **49.49 ± 2.41** | As can be observed, IPR-MPNNs have higher Dirichlet Energy on Cornell, Wisconsin, and Texas, and significantly higher on roman-empire, indicating that over-smoothing is alleviated for these datasets. Please note that IPR-MPNN also obtains much better overall results on these datasets (see previous comment). We will add these results to the final version of the paper. Overall, the empirical evidence indicates that IPR-MPNNs _do not_ increase over-smoothing, but rather slightly alleviate the phenomenon. Thank you for helping us improve our work. We kindly ask the reviewer to reconsider their assessment and increase their score if they find our response convincing. We are open to answering any remaining questions if time permits. 
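For reference, the Dirichlet energy reported in the table can be computed directly from final-layer embeddings. A minimal sketch (our own illustration, assuming an n×d embedding matrix and an undirected edge list; not the authors' measurement code):

```python
import numpy as np

def dirichlet_energy(h, edges):
    """Sum of squared embedding differences across edges; low values
    indicate over-smoothed (near-constant) neighbouring embeddings."""
    h = np.asarray(h, dtype=float)
    u, v = zip(*edges)
    return float(((h[list(u)] - h[list(v)]) ** 2).sum())

edges = [(0, 1), (1, 2), (2, 0)]  # a toy triangle graph
assert dirichlet_energy(np.ones((3, 4)), edges) == 0.0  # fully smoothed
assert dirichlet_energy(np.eye(3), edges) == 6.0        # distinct embeddings
```

Under this definition, higher energy means neighbouring nodes keep more distinct representations, which is the direction of the numbers reported above.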
--- Rebuttal Comment 5.1: Comment: Thank you to the authors for addressing my concern with the oversmoothing measurement. It has alleviated my concerns. I will maintain my score.
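For completeness, the coverage property discussed in R3.2 above — every original node samples exactly $k$ virtual nodes, so the union of the inverse assignments necessarily covers all original nodes — can be sketched as follows (hypothetical names and toy priors; not the authors' sampler):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_assignments(probs, k):
    """Sample an n x m 0/1 matrix H: each original node (row) picks
    exactly k distinct virtual nodes (columns) without replacement
    from its categorical distribution over the m virtual nodes."""
    n, m = probs.shape
    H = np.zeros((n, m), dtype=int)
    for v in range(n):
        H[v, rng.choice(m, size=k, replace=False, p=probs[v])] = 1
    return H

# Toy setting: n=5 original nodes, m=3 virtual nodes, k=2 assignments each.
probs = rng.dirichlet(np.ones(3), size=5)
H = sample_assignments(probs, k=2)
assert (H.sum(axis=1) == 2).all()  # exactly k edges per original node
assert H.any(axis=1).all()         # hence every original node is covered
```

Because the per-node sample size is fixed at $k \geq 1$, coverage holds by construction regardless of the learned probabilities.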
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful comments and suggestions. We believe that our paper is much stronger after considering reviewer feedback. All of the new comparisons and experiments are included in the one-page pdf attached to the global response. --- In the following, we summarize the modifications that will be included in our final paper: **UsNW, bePC, 5s4E:** New experiments on heterophilic datasets (WebKB, [PV23]) - Tables 10, 11 in the pdf. We show that we are able to run our model on large graphs and that, in most cases, we obtain better results on heterophilic datasets when compared to baselines, indicating that IPR-MPNNs might alleviate over-smoothing. **UsNW, bePC, YQwu:** New experiments where we compare with a simple MPNN + VN and we change the number of samples that we use in Tables 13,15. We observe that, generally, having only two samples achieves very good performance with low computation overhead. We will also add a discussion regarding how we are different from MPNN+VN and about the benefits and costs of sampling to the final version of the paper. **UsNW:** We will fix the clarity issues on lines 183-191 and the confusing notation. **YQwu:** We will clarify how we compute the sensitivity in the final version of our paper. **YQwu:** We will add an algorithm table/pseudocode for IPR-MPNN in the final version of our paper. **5s4E:** We will add the three missing related works [ARZ22, VR23, BO24] and expand the discussions on [BE23, KR22]. We will also have a discussion regarding our connection to [GR24, SN24] - two concurrent works that were not available at submission time. **5s4E:** We will add a more nuanced discussion about possible bottlenecks in the virtual graph. **5s4E:** We added new comparisons with [ARZ22] in Tables 10 and 12, and with the concurrent works of [GR24, SN24] in Table 16. We show that we outperform [ARZ22, SN24] and are generally competitive with [GR24]. 
**5s4E:** We added a runtime and memory comparison with Drew [GE23] in Table 14. We are slightly faster than Drew but with slightly higher memory consumption. --- We would like to once again thank the reviewers for their feedback and for helping us significantly strengthen our paper. We are happy to respond to any additional questions during the discussion period. We kindly ask the reviewers to increase their scores if they are satisfied with our response. --- References: [PV23]: Platonov, O., et al. "A critical look at the evaluation of GNNs under heterophily: Are we really making progress?", ICLR 2023 [ARZ22]: Arnaiz-Rodríguez, A., et al. "Diffwire: Inductive graph rewiring via the Lovász bound." LoG 2022. [VR23]: Velingker, A. et al., “Affinity-Aware Graph Networks”, NeurIPS 2023 [BO24]: Barbero, F. et al., “Locality-Aware Graph Rewiring in GNNs”, ICML 2024 [BE23]: Banerjee, P.K. et al., “Oversquashing in GNNs through the lens of information contraction and graph expansion”, ALLERTON 2022 [KR22]: Karhadkar, K. et al., “FoSR: First-order spectral rewiring for addressing over-squashing in GNNs”, ICLR 2023 [GE23]: Gutteridge, B. et al., “DRew: Dynamically Rewired Message Passing with Delay”, ICML 2023 [GR24]: Geisler, S. et al., “Spatio-Spectral Graph Neural Networks”, arXiv 2024 [SN24]: Southern, J. et al. “Understanding Virtual Nodes: Oversmoothing, Oversquashing, and Node Heterogeneity”, arXiv 2024 Pdf: /pdf/3d4b6fad087f140ad140235a17191369baa5da46.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Exploratory Retrieval-Augmented Planning For Continual Embodied Instruction Following
Accept (poster)
Summary: The paper introduces an Exploratory Retrieval-Augmented Planning (ExRAP) framework that tackles the problem of embodied planning in dynamic environments. The study focuses on continual instruction following, which is when the task consists of multiple conditional subtasks. The execution of each subtask is dependent on the environment state. This requires continual exploration to keep up-to-date information and integrated planning abilities for efficiency. ExRAP combines Large Language Models (LLMs) with environmental context memory represented as a temporal embodied knowledge graph (TEKG), that captures the states of objects in the scene and their pairwise relationships. The TEKG is updated at each step using the most recent observation and an update function $\mu$. Each subtask is translated into a pair of a query about the scene and a corresponding execution command with an instruction interpreter $\Phi_l$. Next, the memory-augmented query evaluator $\Phi_M$ estimates the likelihood of the query being satisfied, using quadruples retrieved by $\Phi_R$ as environmental context information for the LLM. The authors enforce a temporal-consistency constraint on the evaluator, resulting in a continual, steady decrease of the entropy of the answer. The execution commands corresponding to the satisfied queries are then passed to the execution part of the model. This part consists of exploitation and exploration planners. The former assesses the effectiveness of the skill in accomplishing the execution task. The latter does the same with regard to reducing the uncertainty of the query evaluator $\Phi_M$. The performed skill is chosen as the one providing the maximum weighted sum of the two scores, balancing exploration and exploitation. In the experiments, the authors measure the Success Rate (SR) of the task completion and the Pending Step (PS), representing the average number of steps taken to complete the task. 
The evaluation is performed on the VirtualHome, ALFRED, and CARLA environments. Performance in various degrees of stationarity in the environment is assessed, showing the advantage of the method. The performance with respect to the number of instructions is also evaluated. The method seems to excel in integrated planning, effectively solving multiple tasks with fewer steps. Ablation studies demonstrate the importance of temporal consistency and exploration-integrated planning. Interestingly, the study reveals a limited effect of choosing a larger LLM as a base model. Strengths: - To the best of my knowledge, one of the first works to tackle conditional embodied planning in a dynamically changing environment. - The results of the paper suggest that the method clearly excels in producing more effective plans than the baselines, integrating multiple tasks in parallel. The plans are being adapted on the fly from the belief about the environment, and the belief is updated by necessary exploration. This is a valuable contribution. Weaknesses: - The paper shows that the method is quite robust to the size of the base LLM. It could be a strength if some training procedure were possible. However, ExRAP relies exclusively on the in-context learning abilities of the LLM, thus the experiment shows that it scales poorly. The paper would benefit from an ablation study on the size of the retrieval dataset, so it can be shown that the method can scale with the context length of the model by increasing the number of examples. - The method excels at following multiple simple parallel conditional instructions, but it is not clear if it will be as good at the long-term tasks that require multiple steps to achieve, like the ones on which the LLM-Planner was tested. The advantage of having an adaptable planner may become a disadvantage against the baselines, as it may require more queries to the environment context memory, which will create a large computational overhead. 
It is unclear whether the method is better than the baselines in integrating multiple complex tasks, each of which requires multiple steps, into an optimal plan. Unfortunately, this is not investigated in the paper. - The applicability to the real-world setting is questionable and not investigated. Would the method scale appropriately with the number of objects in the scene? Methods like ConceptGraphs do not show robustness in that case. - The term "continual embodied instruction following" is a little misleading, as in the paper, the instructions are passed all at once, and not at random times, which would require replanning. The "conditional embodied instruction following in the dynamic environment" could be a better name, though this is not a significant issue. Technical Quality: 3 Clarity: 2 Questions for Authors: - Are the environment settings used in the in-context learning retrieval dataset the same or different from the ones used for the test? - What does the observation $o_t$ consist of for ExRAP? Is it a list of quadruples? How would the setup be adapted for the real-world case? - How does the system handle contradicting instructions? Like “If there is a storm outside, turn off the electricity in the house. If the TV says that the storm is over, turn the electricity on.” - To me, it is not clear why the temporal consistency works. Would a simple baseline of reducing the temperature of prediction perform as well as the temporal consistency constraint in the paper? - It would be interesting to investigate the exploration dynamics of the agent. Do the exploration scores on average decrease steadily throughout the experience, or more rapidly at particular times? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: - Runtime overhead not investigated, but mentioned. - Applicability to the real world is not addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
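The skill-selection rule summarized in this review — execute the skill maximizing a weighted sum of the exploitation and exploration scores — can be sketched as follows (all names and scores are our own toy illustration, not the authors' implementation):

```python
def select_skill(skills, exploit_score, explore_score, alpha):
    """Pick the skill maximizing a weighted sum of an exploitation score
    (progress on execution commands) and an exploration score (expected
    reduction in query-evaluator uncertainty); alpha sets the balance."""
    return max(skills,
               key=lambda s: (1 - alpha) * exploit_score(s)
                             + alpha * explore_score(s))

# Hypothetical toy scores for two candidate skills.
exploit = {"grab_book": 0.9, "walk_kitchen": 0.2}.get
explore = {"grab_book": 0.1, "walk_kitchen": 0.8}.get
skills = ["grab_book", "walk_kitchen"]
assert select_skill(skills, exploit, explore, alpha=0.2) == "grab_book"
assert select_skill(skills, exploit, explore, alpha=0.8) == "walk_kitchen"
```

The two assertions show how the weight shifts behaviour: a low alpha exploits the current belief, while a high alpha favours skills that refresh the context memory.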
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and insights. **W1** The scalability of ExRAP. ExRAP uses retrieval techniques such as DPR and BM25 for the knowledge graph retriever and the demonstration retriever for the exploitation value. As shown in the table below, the average time taken for retrieval and LLM inference is about 14 times lower than the time it takes to infer actions through an LLM without retrieval, using an RTX A6000 GPU and an i9-10980XE. | Retrieval (Quadruple) | Retrieval (Demos.) | LLM Inference | LLM Inference (w/o retrieval) | |---|--|---|---| |0.021 sec|0.024 sec|2.203 sec|31.243 sec| By selecting quadruples and demonstrations relevant to queries and instructions, it effectively reduces the context length and maintains the inference time of the LLM. In experiments, we retrieve only 3 demonstrations and 12 quadruples to generate prompts for each continual instruction, regardless of the size of the knowledge graph or the dataset. To clarify the retrieval models used, we will add these computational overhead results to the Appendix. **W2** The complexity of the environment and computational overhead of ExRAP. For evaluation, we use continual instructions that involve multiple steps to achieve. For example, to address the single continual instruction "If you see a book somewhere unorganized, bring it to the sofa," the agent would need to execute the following steps: 0. initialized in livingroom 1. walk kitchen 2. walk bathroom 3. walk kitchen 4. walk bedroom (find book) 5. walk book 6. grab book 7. walk kitchen 8. walk livingroom 9. walk sofa 10. put book To effectively address complex continual instructions that require exploration, ExRAP performs continual instructions in an integrated approach. 
By doing so, ExRAP maximizes the effectiveness of the agent's actions for continual instructions, as evidenced by a reduction of 3.40 in path length and a 16.45\% improvement in success rate in our experiments. In addition, as noted in our response to W1, ExRAP utilizes a retrieval-based approach to efficiently manage information. Even as the knowledge graph grows, it maintains the inference time of the LLM and minimizes computational overhead through the retrieval-based approach. **W3** The applicability to the real-world setting. The environments used in our experiments (VirtualHome, ALFRED, and CARLA) are highly similar to real-world settings and are frequently used in prior works proposing embodied planning or LLM-based frameworks [1, 2, 3]. For example, in VirtualHome, a single scene contains between approximately 20 and 100 objects, and there are 50 types of room structures, each with varying object states and positions. In ExRAP, a temporal embodied knowledge graph is used to efficiently manage and retrieve information on changing object states, enhancing adaptability and operational efficiency in dynamic environments. [1] Song, Chan Hee, et al. "Llm-planner: Few-shot grounded planning for embodied agents with large language models." Proceedings of the IEEE/CVF ICCV. 2023. [2] Singh, Ishika, et al. "Progprompt: Generating situated robot task plans using large language models." 2023 IEEE ICRA. IEEE, 2023. [3] Yang, Ruoxuan, et al. "Driving Style Alignment for LLM-powered Driver Agent." arXiv preprint (2024). **W4** Considering the term "continual embodied instruction following". Please see general response, Q2. **Q1** Are the environment settings used in the in-context learning retrieval dataset the same or different from the ones used for the test? They are different. 
The demonstrations used in in-context learning and the test environments differ in the structure of rooms and positions of objects in VirtualHome and ALFRED, and in the locations of buildings in CARLA. **Q2** What does the observation $o_t$ consist of for ExRAP? Is it a list of quadruples? How would the setup be adapted for the real-world case? Please see general response, Q1. **Q3** How does the system handle contradicting instructions? In this paper, we focused on optimizing multiple continual instructions and did not address conflict resolution. ExRAP can be expanded into a neural-symbolic approach [4] with a temporal embodied knowledge graph, utilizing various research to identify and resolve contradictions [5, 6]. The reviewer's question addresses a critical aspect of multiple continual instructions and real-world implementation, and it is one of the future directions for ExRAP. We will include this content in the conclusion section to emphasize its importance. [4] Liu, Xiaotian et al. "A planning based neural-symbolic approach for embodied instruction following." Interactions 9.8 (2022): 17. [5] Li, Jierui et al. "ContraDoc: Understanding Self-Contradictions in Documents with Large Language Models." arXiv preprint (2023). [6] Wan, Alexander et al. "What Evidence Do Language Models Find Convincing?" arXiv preprint (2024). **Q4** To me, it is not clear why the temporal consistency works. Would a simple baseline of reducing the temperature of prediction perform as well as the temporal consistency constraint in the paper? The problem of maintaining temporal consistency in LLMs is not resolved simply by lowering the temperature setting. As shown in Appendix D.1, LLMs often fail to reflect the increasing uncertainty of information over time, and this issue can occur even when the temperature is set to zero, particularly in small LLMs. To address this issue, we employ repeated reasoning and a critic to enforce temporal consistency. 
This approach helps ensure that the model maintains consistency over time, effectively capturing and reflecting temporal dynamics even in small LLMs. **Q5** It would be interesting to investigate the exploration dynamics of the agent. Do the exploration scores on average decrease steadily throughout the experience or more rapidly at some particular times? Please see the general response, Q3, and the PDF file. --- Rebuttal Comment 1.1: Comment: Thank you again for your considerate review. As the end of the author-reviewer discussion period approaches, we summarize the key points of our responses to the reviewer. 1. The scalability of ExRAP - ExRAP uses retrieval techniques such as DPR and BM25 for the knowledge graph retriever and the demonstration retriever for the exploitation value. Even as the knowledge graph grows, it maintains the inference time of the LLM and minimizes computational overhead through the retrieval-based approach. By selecting quadruples and demonstrations relevant to queries and instructions, it effectively reduces the context length and maintains the inference time of the LLM. In experiments, we retrieve only 3 demonstrations and 12 quadruples to generate prompts for each query and execution, regardless of the size of the knowledge graph or the dataset. To clarify the retrieval models used, we will add these computational overhead results to the Appendix. 2. The applicability to the real-world setting. - The environments used in our experiments (VirtualHome, ALFRED, and CARLA) are highly similar to real-world settings and are frequently used in prior works proposing embodied planning or LLM-based frameworks. For example, in VirtualHome, a single scene contains between approximately 20 and 100 objects, and there are 50 types of room structures, each with varying object states and positions. 
- In ExRAP, a temporal embodied knowledge graph is used to efficiently manage and retrieve information on changing object states, enhancing adaptability and operational efficiency in dynamic environments. 3. Investigation of the exploration dynamics of the agent. - Figure 1 in the PDF file illustrates the changes in exploration value for a single query during the execution of continual instructions. We observed that the exploration value increased steadily over time and decreased rapidly once information relevant to the query was collected. Specifically, we noted that the increase in exploration was larger in environments with higher non-stationarity, leading to enhanced exploration. This, in turn, resulted in a more frequent drop in exploration value. As the reviewer suggested, investigating the exploration dynamics is an interesting analysis that demonstrates how ExRAP improves performance. We will add this to the Appendix of the final version. The reviewer's suggestion to analyze exploration dynamics offers valuable insight by demonstrating the characteristics of our agent's exploration, which helps readers' understanding. Additionally, the discussion on contradicting instructions points to potential approaches for resolving conflicts and offers constructive comments for our future work. We will do our best to address any questions the reviewer may have until the end of the reviewer-author discussion period. Please feel free to ask any further questions. Thank you.
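The exploration-value dynamics described in this exchange (steady growth as information becomes stale, a rapid drop once relevant observations are collected) can be illustrated with a toy staleness model; the linear growth rate, cap, and reset-on-observation rule below are illustrative assumptions for exposition, not ExRAP's actual formulation.

```python
# Toy model of the reported exploration-value dynamics: uncertainty about
# a query grows with time since the last relevant observation, and drops
# rapidly once the agent observes the queried fact. The growth `rate` and
# the hard reset are illustrative assumptions.

def exploration_value(steps_since_obs, rate=0.2, cap=1.0):
    """Staleness-based uncertainty proxy in [0, cap]."""
    return min(cap, rate * steps_since_obs)

trace = []
steps_since_obs = 0
observed_at = {4, 9}  # steps where the agent collects query-relevant info
for t in range(12):
    if t in observed_at:
        steps_since_obs = 0  # observation collapses the uncertainty
    trace.append(round(exploration_value(steps_since_obs), 2))
    steps_since_obs += 1

print(trace)  # rises steadily, resets at t=4 and t=9
```

A higher `rate` would mimic a more non-stationary environment, reproducing the reported pattern of larger increases and more frequent drops.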
Summary: This paper presents the Exploratory Retrieval-Augmented Planning (ExRAP) framework to address the challenge of continual instruction following in non-stationary embodied environments. ExRAP enhances the reasoning capabilities of Large Language Models (LLMs) by efficiently exploring the environment and maintaining an environmental context memory to ground the task planning process. The paper's main contributions are: 1. A novel ExRAP framework integrating LLMs, memory-augmented reasoning, and information-based exploration for continual instruction following. 2. Temporal consistency refinement and information-based exploration schemes tailored for ExRAP's integrated planning approach. Strengths: The memory-augmented query evaluation and exploration-integrated planning framework provide a principled way to bridge the gap between high-level language understanding and low-level embodied decision-making. The temporal consistency refinement scheme is a novel contribution in embodied contexts that addresses the important issue of information decay in memory-based solutions over time. Using a temporal embodied knowledge graph (TEKG) as an environmental context memory enables the agent to efficiently assess the satisfaction of instruction conditions without constantly revisiting the environment. Weaknesses: 1. ExRAP's performance still somewhat depends on the underlying LLM's reasoning abilities. While the ablation study shows robustness to smaller LLMs, it's unclear how the framework would perform with much weaker language models or in domains where the LLM's knowledge is limited. 2. The environments' non-stationary aspects are limited to a single instruction changing at fixed intervals. This does not fully capture the complexity and unpredictability of real-world non-stationary environments, where goals, conditions, and constraints can evolve in more intricate ways. 3. 
The instructions used in the experiments seem to be largely independent of each other, without any significant long-term dependencies or relationships between tasks. Examples from Table A.1 are: "If no one is watching the TV, turn it on." "If you have an apple somewhere, bring it to your desk." "If you see a book somewhere unorganized, bring it to the sofa." These instructions in the table appear to be independent tasks without any clear long-term dependencies or relationships between them. Each instruction can be completed independently of the others. That being said, the ExRAP framework does take important steps by incorporating memory-augmented reasoning and exploration-integrated planning. These components provide a way of handling evolving instructions and non-stationary environments. The authors' claims may be somewhat overextended given the current experimental setup, but the work still makes valuable contributions to embodied instruction following. Technical Quality: 3 Clarity: 3 Questions for Authors: If the authors can address points 2 and 3 from the weaknesses, it will help in understanding whether there is a distinction between concurrent or multi-task instruction following and truly continual instruction following. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Authors have addressed some of the limitations of the work in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and insights. **W1** ExRAP's performance still somewhat depends on the underlying LLM's reasoning abilities. While the ablation study shows robustness to smaller LLMs, it's unclear how the framework would perform with much weaker language models or in domains where the LLM's knowledge is limited. When LLM performance degrades to a level where knowledge-based reasoning becomes unfeasible, a decline in ExRAP's performance is inevitable. However, as the reviewer commented, ExRAP demonstrates robust performance even in smaller LLMs, as shown in the ablation study (Section 4.2). If implementation with much weaker language models or in domains where the LLM's knowledge is limited is required, techniques such as imitation learning [1] or knowledge distillation [2] could be utilized to address these challenges. [1] Das, Abhishek, et al. "Embodied question answering." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018. [2] Choi, Wonje, et al. "Embodied CoT Distillation From LLM To Off-the-shelf Agents." Forty-first International Conference on Machine Learning. **W2** The environments' non-stationary aspects are limited to a single instruction changing at fixed intervals. This does not fully capture the complexity and unpredictability of real-world non-stationary environments, where goals, conditions, and constraints can evolve in more intricate ways. At every change interval, the position or state of some objects related to an instruction is changed randomly. While the overall environment changes at fixed intervals, the specific objects that change at each interval are variable. This means that for each single continual instruction, the changes do not occur at fixed intervals. We will add these details comprehensively to the Appendix to clarify this aspect. Thank you for your comment. 
**W3** The instructions used in the experiments seem to be largely independent of each other, without any significant long-term dependencies or relationships between tasks. Examples from Table A.1 are: "If no one is watching the TV, turn it on." "If you have an apple somewhere, bring it to your desk." "If you see a book somewhere unorganized, bring it to the sofa." These instructions in the table appear to be independent tasks without any clear long-term dependencies or relationships between them. Each instruction can be completed independently of the others. We propose a framework that efficiently integrates and executes shared subtasks among continual instructions, even when each instruction appears to exist independently. For example, while moving to check whether no one is watching TV, ExRAP can simultaneously verify the location of a book, or pick up a nearby apple and place it on the table while en route to turn on the stove. In response to the reviewer's comments, we conducted experiments on scenarios where query conditions explicitly overlap, or where the execution of one continual instruction explicitly impacts other instructions, such as "When the computer is off, the mug should always be on the coffeetable; if your computer stays on, turn it off." | Model | SR ($\uparrow$) | PS ($\downarrow$) | |--------------|-----------------|-------------------| | LLM-Planner | 40.11% | 15.80 | | ExRAP | 51.36% | 12.59 | --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. The response addressed my concerns. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you once again for your insightful feedback and thoughtful comments. In particular, the discussion on continual instructions with long-term dependencies or relationships between them is invaluable. 
These results highlight ExRAP's capability not only to handle independent continual instructions with implicit overlapping subtasks but also to perform effectively in scenarios where these instructions influence each other, emphasizing the generality and scalability of our approach. We will include these additional experimental results and the clarifications requested by the reviewers in the final version of our manuscript. Your review has been constructive for our approach. Thank you!
Summary: The paper considers a problem of continual instruction following where an instruction contains multiple query-execution pairs to be performed. For this, the paper proposes ExRAP, comprised of two components: query evaluation and exploration-integrated task planning. Query evaluation incrementally updates the temporal embodied knowledge graph and selects a set of the most relevant executions. The exploration-integrated task planning part uses both exploration and exploitation by weight-summing the predicted values of predefined skills, whose weights are hyperparameters. The performance in the experiment shows noticeable margins compared to the baseline models. Strengths: - The paper is generally written well and easy to follow. - The paper tackles an important and challenging problem of continual instruction following. - Multiple benchmarks are explored for the proposed problem setup, implying generalizability of the setup, yet posing some scale issue (see weaknesses). Weaknesses: - While I agree with addressing an instruction that contains multiple tasks, it is unclear why it should be in the explicit form of query(condition)-execution pairs. This is not well-justified. In addition, can't it be implicit? - Regarding the first question, it looks like the proposed approach exploits the query-execution format of the proposed task setup (*e.g.*, query evaluator). - The scale of the continual instruction following dataset is a bit limited. All 3 datasets use fewer than 20 continual instructions, which may potentially raise a generalizability concern. - High errors are observed across many tables in the experiments. - Why is this the case? Can this be related to the small scale of the dataset? - One argued contribution is the good performance of the proposed approach, but due to the high errors, it is unclear whether the argued outperformance is valid. 
- The paper introduces many hyperparameters, but it is unclear how sensitive the proposed approach is to their choice. - How is the knowledge graph obtained? Is it obtained as GT values or predicted by some learned perception modules? - In L184, why do the authors posit $H(R(q|G_{t-1})) > H(P(q|G_{t-1}))$? Any intuition behind this? - Considering that the term "continual" also refers to learning some new tasks sequentially (*e.g.*, RL [a], IL [b], or even EIF [c]), discussion about this aspect seems needed to clarify the usage of "continual" in this paper. References\ [a] Xie and Finn. Lifelong robotic reinforcement learning by retaining experiences. CoLLAs, 2022.\ [b] Gao et al. CRIL: Continual robot imitation learning via generative and prediction model. IROS, 2021.\ [c] Kim et al. Online continual learning for interactive instruction following agents. ICLR, 2024. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses above. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors do not address the potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
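The weight-summed skill selection described in this review's summary (exploration and exploitation values combined with hyperparameter coefficients) can be sketched as follows; the skill names, the value numbers, and the coefficient setting are made-up illustrations, not the paper's actual values or implementation.

```python
# Minimal sketch of exploration-integrated skill selection: each candidate
# skill gets a combined score w_R * exploration_value + w_T * exploitation_value,
# and the planner picks the argmax. All numbers here are illustrative.

W_R, W_T = 100.0, 1.0  # coefficients treated as hyperparameters

skills = {
    "walk kitchen":  {"exploration": 0.6, "exploitation": 10.0},
    "grab book":     {"exploration": 0.1, "exploitation": 80.0},
    "walk bathroom": {"exploration": 0.9, "exploitation": 5.0},
}

def combined_score(values):
    return W_R * values["exploration"] + W_T * values["exploitation"]

best = max(skills, key=lambda s: combined_score(skills[s]))
print(best)  # the skill whose weighted sum is highest
```

Shifting the ratio of `W_R` to `W_T` trades exploration against exploitation, which is the sensitivity the review asks about.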
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and insights. **W1**. Should the continual instructions be explicit? The continual instructions are not always in the explicit form of query-execution pairs; they can also be implicit, such as "Always leave the stove open" in Appendix A.1. In that case, they can be interpreted and executed conditionally as "Query: stove is closed, Execution: open the stove" by an instruction interpreter. Additionally, in our main text, within Section 4.1 on instruction type (Lines 276-284), we demonstrated that tasks could still be performed with minimal performance degradation even when multiple continual instructions are summarized or when objects are ambiguous. **W2** Exploitation of the query-execution format. We utilize an instruction interpreter to transform continual instructions, in which each instruction is either implicit or explicit, into a query-execution format. To address various formats of continual instructions, we use an LLM as the instruction interpreter. In the fields of real-time databases, event-based query processing, and monitoring system applications, it is common to separate user requests into query and execution components [1, 2]. Inspired by these processing methods, we proposed the ExRAP framework, which uses the query-execution structure for continual instructions. [1] Shaikh, Salman Ahmed, et al. "Smart query execution for event-driven stream processing." 2016 IEEE Second International Conference on Multimedia Big Data (BigMM). IEEE, 2016. [2] Liu, Ling, Calton Pu, and Wei Tang. "Continual queries for internet-scale event-driven information delivery." IEEE Transactions on Knowledge and Data Engineering 11.4 (1999): 610-628. **W3** The scale of the continual instructions. For evaluation, we construct the continual instructions by sampling 3, 5, or 7 instructions from a set of 19 continual instructions. Choosing 5 of these 19 possible continual instructions results in 11,628 unique combinations. 
We also evaluate the types of instructions to demonstrate the generalization performance across different scenarios, ensuring that our approach can effectively handle a diverse range of instructions. **W4** High errors are observed across many tables in the experiments. The time-varying features of the environment and the large number of combinations of continual instructions contribute to a large variance in performance. Despite this, the experimental results remain valid. As shown in Table 1 of the main text, the confidence intervals for the experiments in the ALFRED and CARLA environments do not overlap with those of the baselines, indicating that our results are statistically significant. To minimize experimental error, we increased the number of random seeds used in the experiments from 5 to 10. The table below shows the performance of ExRAP and other baselines with 10 random seeds under medium non-stationarity in VirtualHome. As the error bars decrease sufficiently and the overlap with the baselines disappears, we can observe that the results become statistically significant. | Model | SR ($\uparrow$) | PS ($\downarrow$) | |----|-----|------| | ZSP | 20.06% ± 1.93% | 32.06 ± 4.66 | | SayCan | 33.69% ± 5.36% | 21.81 ± 4.14 | | ProgPrompt | 30.51% ± 5.31% | 23.43 ± 1.07 | | LLM-Planner | 39.89% ± 4.52% | 15.93 ± 2.13 | | ExRAP | 55.14% ± 6.59% | 11.33 ± 1.92 | **W5** Hyperparameter sensitivity. In our experiments, the hyperparameters that impact performance are the exploration value coefficient $w_R$ and the exploitation value coefficient $w_T$. The table below exhibits robust performance across a wide range of hyperparameter settings. When the exploitation value coefficient is high, exploration may be reduced, leading to a slight decrease in performance. However, the advantage of integrated planning across multiple instructions still results in comparable performance. 
When the exploration value coefficient is high, an excessive focus on gathering information can also lead to a slight decrease in performance. We will add these results to the Appendix for the final version. | $w_R: w_T$ | SR ($\uparrow$) | PS ($\downarrow$) | |------------|-----------------|-------------------| | 1000:1 | 49.70% | 14.67 | | 100:1 | 55.14% | 11.33 | | 10:1 | 51.33% | 13.98 | | 1:1 | 42.62% | 16.66 | **W6** How is the knowledge graph obtained? Is it obtained as GT values or predicted by some learned perception modules? Please see general response, Q1. **W7** The intuition of $H(R(q|G_{t-1})) > H(P(q|G_{t-1}))$. This is based on the intuition that the uncertainty of information increases over time in time-varying environments. For instance, if the location of an object is observed at step $t-1$, the uncertainty about the object's location may stay the same (not moved) or increase by step $t$ in a non-stationary environment. Consequently, the uncertainty associated with the knowledge graph $G_{t-1}$ increases monotonically by the time it reaches step $t$. Therefore, when evaluating queries based on this knowledge graph, the uncertainty in the query response also increases. This aligns with common practice in fields such as sensor databases and monitoring systems, where dealing with data uncertainty is crucial [3, 4]. The formula should be corrected to $H(R(q|G_{t-1})) \geq H(P(q|G_{t-1}))$. Thank you for the detailed review. [3] Prasad Sistla, A., et al. "Querying the uncertain position of moving objects." Temporal databases: research and practice (1998): 310-337. [4] Cheng, Reynold, and Sunil Prabhakar. "Managing uncertainty in sensor database." ACM SIGMOD Record 32.4 (2003): 41-46. **W8** Considering the meaning of the word "continual." Please see general response, Q2. --- Rebuttal Comment 1.1: Title: Response by Reviewer K6e1 Comment: Thank you for your detailed response. 
The response addressed my concerns, but a few things still remain unclear. - For W1, "Always leave the stove open" seems not implicit but explicit, with a condition that is always `True`. The current continual instruction proposed is usually in the form of condition-execution pairs (i.e., if something is happening, then do something). My concern is why we need the "two", condition and execution. For instance, can't it be done with only execution? - For W8, thank you for the clarification, but this can be quite confusing and it might be better to clarify this somewhere in the main paper. --- Rebuttal 2: Comment: Thank you for your kind feedback and the additional questions. Our responses are as follows: **For W1.** - Thank you for the valuable insight. Our ExRAP can also be applied in situations where instructions consist solely of execution. The table below shows the performance of simple experiments on instructions that involve only execution and no conditions. Although the need for exploration is reduced, ExRAP demonstrates an advantage over the baseline by efficiently executing multiple instructions through integrated task planning. We will include these results and experiments comparing the entire set of baselines in the Appendix of the final version. | Model | SR (↑) | PS (↓) | | --- | --- | --- | | LLM-Planner | 74.17% | 8.80 | | ExRAP | 79.86% | 7.20 | - As the reviewer mentioned, in environments where only the robot agent exists and there are no changes over time, the condition of "Always leave the stove open" can be True. However, in dynamic environments where other agents or users can interact with the stove, the instruction would need to be divided into a condition and execution: "If the stove is closed", then "open the stove". For instructions that appear to only involve execution, such as "put the mug in the sink", ExRAP addresses this by relating the condition to the goal, such as "if the mug is not in the sink". 
If we treat the condition as always true, it could force the repetition of an already completed task, potentially leading to performance degradation. These examples are considered important for explaining our framework, and we will ensure to include them in our paper. - Converting to a query-execution format becomes a crucial part of optimizing exploration in ExRAP. However, simply separating query and execution does not fully resolve the continual instruction following problem in time-varying dynamic environments. We manage environmental observations through a knowledge graph-based approach, and efficiently utilize information through retrieval to evaluate queries. Additionally, an integrated exploration and exploitation planner enables efficient planning across multiple instructions. **For W8.** - Thank you for your thoughtful feedback. We will add the following statement to the main text, Line 108: "The continual instruction does not refer to continual learning, which learns some new task sequentially. It aligns closely with the concept of continual queries, which are standing queries that monitor updates of interest and return results whenever the updates reach specific thresholds." We are very pleased that the reviewer's concerns and most of the identified weaknesses have been addressed. If you have any additional questions, please feel free to ask. Thank you! --- Rebuttal Comment 2.1: Title: Thank you for the response. Comment: I thank the authors for the response with the additional experiment. The response addressed my concerns and I am happy to increase the score. --- Reply to Comment 2.1.1: Title: Thank you! Comment: Thank you for raising your score. We truly appreciate your consideration and constructive comments and feedback. We will incorporate this discussion and additional experiments into the final version. 
While our paper focuses on solving continual instructions, your suggestion to extend our experiments to include what appears to be purely execution-based instructions is insightful feedback that can demonstrate the generality of our model. Thank you once again for your response. The discussion with you is very meaningful and greatly encourages us.
null
null
Rebuttal 1: Rebuttal: **General Response** We thank the reviewers for their valuable feedback. We are encouraged that they found our approach novel, well-written, easy to follow, addressing an important and challenging problem, and providing a valuable contribution, as well as providing extensive evaluations which demonstrate the effectiveness of our approach on multiple benchmarks. We addressed all weaknesses and questions to resolve the concerns raised. For those areas where the reviewer requested additional information, we conducted further studies and provided detailed explanations to enhance overall understanding. **Q1** Considering the observation of the environment. (Reviewer K6e1, Reviewer kQXj) In alignment with prior works on LLM-based agents [1, 2, 3, 4], the agent collects text-based observations about objects visible within its field of view or through interaction with objects. Each observation is then simply transformed into a quadruple (e.g., "book is on the table" becomes ("book", "on", "table")). ExRAP constructs a temporal embodied knowledge graph using these observations. However, addressing the reviewer's concern, we conducted experiments under the assumption that the agent collects observations via camera and utilizes an object detection module or Vision-Language Models (VLMs) for text observation, resulting in only partial information being detected when observing the environment. The table below demonstrates that even with a decrease in detection probability, sufficient performance is maintained. | Detection Prob. | SR ($\uparrow$) | PS ($\downarrow$) | |-----------------|-----------------|-------------------| | 80% | 50.77% | 13.52 | | 90% | 52.97% | 12.38 | | 95% | 54.71% | 11.84 | | 100% | 55.14% | 11.33 | [1] Song, Chan Hee, et al. "Llm-planner: Few-shot grounded planning for embodied agents with large language models." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. [2] Xiang, Jiannan, et al. 
"Language models meet world models: Embodied experiences enhance language models." Advances in Neural Information Processing Systems 36 (2024). [3] Lin, Bill Yuchen, et al. "On grounded planning for embodied tasks with language models." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 11. 2023. [4] Singh, Ishika, et al. "Progprompt: Generating situated robot task plans using large language models." 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023. **Q2** Considering the term "continual embodied instruction following". (Reviewer K6e1, Reviewer kQXj) As noted in the main text (Lines 100-108), "continual instruction" is defined as multiple instructions that need to be performed continuously and simultaneously. So, the term "continual" as we use it does not refer to continual learning, which learns some new task sequentially. Rather, this usage aligns closely with the concept of continual queries [6, 7], which are standing queries that monitor updates of interest and return results whenever the updates reach specific thresholds. Therefore, continual instruction following refers not merely to following conditional instructions but to a broader problem definition. It includes the need to continuously execute instructions in response to the environment and to simultaneously address multiple instructions. **Q3** It would be interesting to investigate the exploration dynamics of the agent. Do the exploration scores on average decrease steadily throughout the experience or more rapidly at some particular times? (Reviewer kQXj) Figure 1 in the PDF file illustrates the changes in exploration value for a single query during the execution of continual instructions. We observed that the exploration value increased steadily over time and decreased rapidly once information relevant to the query was collected. 
Specifically, we noted that the increase in exploration value was larger in environments with higher non-stationarity, leading to enhanced exploration. This, in turn, resulted in more frequent drops in exploration value. As the reviewer suggested, investigating the exploration dynamics is an interesting analysis that demonstrates how ExRAP improves performance. We will add this to the Appendix of the final version. Thank you for the suggestion. **Q4** The potential negative societal impact. Our work does not involve activities associated with negative societal impacts, such as disseminating disinformation, creating fake profiles, or conducting surveillance. Therefore, we do not expect any negative societal impacts from our research. We will add this statement explicitly in the final version. Pdf: /pdf/8cd7957fc15cf6667bfae2605a667cba29301acd.pdf
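The text-to-quadruple conversion mentioned in Q1 ("book is on the table" becomes ("book", "on", "table")) can be illustrated with a toy parser. This is purely a hypothetical sketch: the actual ExRAP pipeline is not specified at this level of detail, and its quadruples presumably carry an additional element (such as a timestamp for the temporal knowledge graph) beyond the (subject, relation, object) part shown here.

```python
import re

def obs_to_triple(sentence):
    """Toy illustration only: parse observations of the form
    'X is <relation> [the] Y' into a (subject, relation, object) tuple."""
    m = re.match(r"(\w+) is (\w+) (?:the )?(\w+)", sentence)
    return m.groups() if m else None

# e.g. obs_to_triple("book is on the table") -> ("book", "on", "table")
```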
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Tolerant Algorithms for Learning with Arbitrary Covariate Shift
Accept (spotlight)
Summary: This paper studies the problem of PAC learning with covariate shift. In particular, it examines two specific learning frameworks: one is PQ learning, where a learner is allowed to abstain on some testing samples but is required to have good accuracy on the retained samples; the other is TDS learning, where one is allowed to abstain entirely if the testing distribution is detected to be far from the training distribution. From a technical point of view, the paper is restricted to cases where the covariate distribution is Gaussian or uniform and the concept class is nicely approximated by polynomials of low degree (i.e., the so-called "sandwiching" functions). The main contribution appears to extend prior results to handle arbitrary fractions of outliers and provide tighter bounds. Strengths: While I'm not an expert in PAC learning using polynomial regression, this paper appears to provide some interesting technical results. The outlier removal algorithm introduced in Theorem 3.1 seems to be an interesting technical primitive for handling outliers in low-degree polynomial regression. Although the settings considered in this paper are fairly restrictive, the results seem to provide a valuable step toward understanding learning with abstention under covariate shift. Weaknesses: The main weakness of the paper, in my opinion, is the presentation. The paper is very hard to read, especially for those who are not immediately familiar with the relevant literature. I outline the following specific comments: 1. The main text as written does not provide much technical information. For example, Theorem 3.1 is supposed to be the main technical ingredient of the paper, but the proof overview provides nearly no information. Due to the limited time period of the NeurIPS review, I don't have time to delve into the technical details provided in the appendix. I suggest the authors provide a much more detailed overview in the main text. 
It does not necessarily need to include all the technical details, but it should provide enough information on the logical flow. Perhaps the "Comparison with [DKS18]" section can be omitted to save space? 2. The theorem statements are sometimes framed very sloppily. For example, the $\lambda$ in Theorem 4.1 was never defined (though it appears in the proof); in Lemma B.2., the sentence "we have f(x) = 0 for all x such that..." does not make sense to me, as the subsequent quantifier has nothing to do with x, referring to f(x)=0 always. Since there are too many sloppy statements like this, I lack the energy to check the details carefully. 3. The authors claim to provide "efficient learning algorithms," but all the computational complexities scale exponentially with respect to the error parameters $\epsilon$. Did I miss something? Can this parameter be selected as a constant and be boosted in some way? 4. Is there a specific reason why Theorem 3.1 is stated for the second moment? Is it due to Lemma B.1? Given these reasons, although I believe this paper has some interesting technical contributions, it would require substantial revision to be suitable for publication at NeurIPS. Technical Quality: 3 Clarity: 2 Questions for Authors: See above. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and effort. **Question 1.** We note that, while we provide most of the technical details of our proofs in the appendix, we do provide a high-level explanation of our results in the main part, including Theorem 3.1 (see lines 189–223). In fact, the comparison with [DKS18] partly explains the technical differences between Theorem 3.1 and results on outlier removal from prior work. Given the reviewer’s feedback, however, we will add some further technical details in the main paper. **Question 2.** We respectfully disagree with the reviewer’s comment that our statements lack precision. Note that the parameter $\lambda$ is defined in the preliminaries (lines 146–148). Moreover, in Lemma B.2, the premise regarding $f$ is that $f(x) = 0$ for any $x$ that satisfies some property (the one described in line 693, i.e., that $p(x)^2$ is more than $B$ for some polynomial $p$ with low second moment) and not all $x$. We will clarify these points in the main paper and we are happy to address any further specific concerns that the reviewer may have. **Question 3.** The algorithms we propose are efficient in the dimension of the input, which is a standard goal in learning theory (see, for example, the classical work of [KKMS08]). In the presence of label noise, the exponential dependence on the parameter $\epsilon$ cannot be removed, even if one assumes that there is no distribution shift (see [DKPZ21]). Additionally, when there is no label noise, the algorithm of Theorem 4.3 for PQ learning of halfspaces is quasi-polynomial in all parameters and is optimal in the SQ framework (see [KSV24a]). **Question 4.** Theorem 3.1 is stated for the second moment for two reasons. First, using the second moment enables the use of spectral methods for the outlier removal procedure and is a common approach for prior results on outlier removal. 
Second, it is sufficient for our purposes, since the difference of the upper and lower $\mathcal{L}_2$ sandwiching approximators is itself a polynomial with low second moment under the target distribution. Subsequently, we can use the outlier removal procedure to find a subset of the input set over which the bound on the squared difference of the sandwiching approximators is preserved. *[KKMS08] Adam Tauman Kalai, Adam R Klivans, Yishay Mansour, and Rocco A Servedio. Agnostically learning halfspaces. SIAM Journal on Computing, 37(6):1777–1805, 2008.* *[DKPZ21] Diakonikolas, Ilias, Daniel M. Kane, Thanasis Pittas, and Nikos Zarifis. "The optimality of polynomial regression for agnostic learning under gaussian marginals in the SQ model." In Conference on Learning Theory, pp. 1552-1584. PMLR, 2021.* *[KSV24a] Adam R Klivans, Konstantinos Stavropoulos, and Arsen Vasilyan. Learning intersections of halfspaces with distribution shift: Improved algorithms and sq lower bounds. 37th Annual Conference on Learning Theory, COLT, 2024.* --- Rebuttal Comment 1.1: Comment: I thank the authors for the clarifications. I raise my rating to 6. Since I'm not an expert in this particular field, I will have no objection if the other reviewers lean towards acceptance.
Summary: The authors consider the problem of efficiently learning a concept class under distribution shift, i.e. in the setting where the training data and testing data are drawn from different distributions. They study two frameworks: PQ learning and TDS learning. In the former, the learner is allowed to abstain from classifying a part of the test samples. In the latter, the learner is allowed to abstain entirely if the test distribution is 'visibly' different from the training distribution. The paper has two main contributions. First, in the PQ setting, the authors obtain the first dimensionally-efficient learning algorithms, under the assumption that the training data is nicely distributed (Gaussian or uniform on the cube) and the concept class is simple (intersection of halfspaces or low-complexity formulas). Second, in the TDS setting, under the same assumptions, they provide the first efficient algorithms that tolerate small distribution shift in TV-distance. This generalizes earlier results which only tolerate shift 'in the moments' of the distribution (which is a weaker notion). The proof of these results consists roughly of two steps. First, the authors adapt/improve an existing spectral technique for outlier-removal to show that the test data can be pruned so that it satisfies a 'low-degree spectral boundedness property', without removing too many samples. Second, they show that this spectral boundedness property suffices to apply low-degree polynomial regression to the PQ/TDS-learning problems (in certain settings). In order to do so, they rely on the notion of 'L2-sandwiching polynomials' of [KSV24b]. The important distinction w.r.t. [KSV24b] is that, there, the authors rely on a 'moment-matching property' (which is stronger than spectral boundedness, and in particular, not always satisfied even if the training and test data are close in TV-distance). 
Strengths: - The paper achieves strong and novel results in an area which I believe to be of interest to the NeurIPS community. It falls in a line of research focusing on relaxing (or testing) distributional assumptions in algorithmic learning theory, which has recently received a lot of interest. In particular, I like that the paper goes beyond the 'moment-matching' approach of earlier works. - I find the combination of ideas from [KSV24b] and outlier removal to obtain the main results very insightful. - The paper is very well written: the key technical ideas are exposed clearly, and it is easy even for the non-expert to grasp the main concepts. Furthermore, the results and some of their consequences are positioned clearly in the existing literature. Lastly, the technical background (on learning models, previous algorithms etc.) is well presented. - The work leaves open some directions for future research, and I think it is likely that its ideas will be applied to obtain future results in this area. Weaknesses: - Arguably, much of the technical 'heavy-lifting' to obtain the results of this paper is done by the 'L2-sandwiching' of [KSV24b], and to a lesser extent the outlier removal technique of [DKS18] (which the authors do significantly improve on). - The results apply only to very simple concept classes. It would have been nice to have guarantees in more complicated settings as well, e.g. for (low-degree) polynomial threshold functions. Similarly, the distributional assumptions on the training data are quite restrictive (although it should be noted that this is not uncommon in the literature). Technical Quality: 4 Clarity: 4 Questions for Authors: - l62-63: In testable learning, one does not test whether the underlying distribution is actually Gaussian (which is not possible), but rather whether it shares some important characteristics with a Gaussian distribution (e.g. its low-degree moments match). 
I guess the word 'indirectly' is meant to allude to this distinction, but I think this phrasing could be confusing. - l111-117: It is not clear to me from this paragraph whether any work on *distribution-specific* PQ-learning was done before this paper. In particular, I think it would be good to clarify whether 1) this work gives the first *efficient* distribution-specific PQ algorithm (but inefficient *distribution-specific* algorithms had been considered), or 2) this work is the first to consider distribution-specific PQ-learning at all. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Adequately addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and appreciation of our work. - Lines 62-63: We will clarify this point; thank you for pointing it out. - Lines 111-117: Our work is indeed the first to consider distribution-specific PQ-learning at all. Prior work involved reductions to other learning primitives, like agnostic or reliable agnostic learning, but the reductions did not preserve the marginal distributions and, hence, the resulting algorithms were inherently distribution-free.
Summary: This work proposes methods for learning under covariate shift. In particular, it studies the PQ learning and TDS learning models. It provides an algorithm based on a filtering technique. Given sample access to two distributions, the algorithm produces a filter that rejects points from the second distribution which attain large values under some degree-$k$ polynomial, while not rejecting many points from the first distribution. This algorithm is then applied to learning some function classes (halfspaces and classes with low sandwiching degree) in the PQ learning model and in the tolerant TDS learning model. In the former model, the authors obtain a computationally efficient algorithm, and in the latter an algorithm which does not reject a distribution even if there exists a small but non-negligible shift. Strengths: 1. The paper provides novel results in PQ learning and TDS learning. Solving these learning models is important, since in practice algorithms need to perform well under covariate shifts. 2. The core algorithm of the paper is versatile, as the authors show how to apply it to multiple other learning settings (e.g. learning with nasty noise). 3. For TDS learning, prior works rejected distributions with even a very small distributional shift. This work provides an algorithm which gives non-trivial results even if the distance between distributions is larger. 4. This work suggests a novel technique to bound the number of iterations of their algorithm through the use of a potential function. 5. The main text of the paper is well-written. Weaknesses: I find that the main weakness of the paper is the quality of the proofs in the appendix. They are often inaccurately written and require a lot of time to understand the arguments because of the typos. But it seems that there are some inaccuracies in the proofs of the core results beyond typos, which I highlight in the 'Questions' section. 
Technical Quality: 2 Clarity: 2 Questions for Authors: My main question is about Appendix B, which I believe is crucial for the proof of all technical theorems. I find this section not carefully written with inconsistent notation and typos, which makes it hard to follow the arguments. Next, I focus on the parts which were not merely typos: 1. I do not follow equation (B.2). First of all, it should be square of the Frobenius norm. But still second and third step is unclear. Could the authors explain these steps in more detail? 2. If I understand correctly, equation (B.3) requires $2k$-tameness, instead of $k$-tameness. This is because $r_j(x)r_{j’}(x)$ are of degree $2k$. 3. Equation (B.4): I do not follow first inequality, where does first $\Delta$ come from? Since $f(x)(p(x))^2 = \sum_{j = 0}^{B / \Delta} f(x)(p(x))^2 \mathbb{I}(j\Delta \leq (p(x))^2 < (j+1)\Delta)$, we obtain that $\lvert E_{x\sim S} f(x) (p(x))^2 - E_{x\sim D'} f(x)(p(x))^2\rvert \leq \sum_j \lvert E_{x\sim S} f(x) (p(x))^2 \mathbb{I}_j - E_D f(x) (p(x))^2 \mathbb{I}_j \rvert$, and there is no additive $\Delta$? $(\mathbb{I}_j := \mathbb{I}(j\Delta \leq p(x)^2 < (j+1)\Delta)$ 4. Line 922: Can authors clarify why there exists such $\tau_i$? it seems that if I increase $\tau$, then the value on the left decreases to 0, while the value on the right stays positive? I will consider changing my score when authors address these questions. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: There are no ethical limitations in this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback. We will carefully go through our proofs in the appendix to fix all typos for future revisions. Regarding the specific questions of the reviewer: **Question 1.** You are correct that Equation (B.2) should have the square of the Frobenius norm on the left side. Thank you for pointing out that typo. This will in turn slightly change the bound of the equation between lines 677 and 678, where we will instead use the fact that $\mathbb{E}_{S_i}[||I-M^{-1/2}M_iM^{-1/2}||_2] \le \sqrt{ \mathbb{E} _{S_i}[||I-M^{-1/2}M_iM^{-1/2}||_F^2]} \le \frac{(dk)^{O(k)}}{N^{1/4}}$. The statement of line 678 is still true for some sufficiently large constant $C$. We now give the omitted details for Equation (B.2). We start with the second equality. We have $\mathbb{E}_ {x\sim D}[ r_{j}(x) r_{j'}(x) ] = e_{j}^{T} e_{j'}$, which is one if $j=j'$ and zero otherwise (by the equation starting directly after line 669). We also have $e_j^T M^{-1/2}=r_j^T$ and $M^{-1/2} e_{j'}= r_{j'}$ and therefore $e_j^T M^{-1/2} M_i M^{-1/2} e_{j'}= r_j^T M_i r_{j'}$, which equals $\mathbb{E}_ {x \sim S_i}[r_{j}(x) r_{j'}(x)]$. Now we explain the third equality, after adding $\mathbb{E}_ {S_i}$ (the expectation over the random selection of the elements of $S_i$) in the beginning of the second line, which is missing due to a typo. Since the set $S_i$ is composed of i.i.d. samples from $D$, the quantity $\mathbb{E}_ {S_i}(\mathbb{E}_ {x \sim D}[r_{j}(x) r_{j'}(x)]-\mathbb{E}_ {x \sim S_i}[r_{j}(x) r_{j'}(x)])^2$ equals the variance of the random variable $\mathbb{E}_ {x \sim S_i}[r_{j}(x) r_{j'}(x)] = \frac{1}{|S_i|}\sum_{x\in S_i}r_{j}(x) r_{j'}(x)$ (the randomness is over the choice of $S_i$). Since the elements of $S_i$ are chosen i.i.d. from $D$, we can use the fact that the variance of a sum of i.i.d. variables equals the sum of their variances. 
Overall, we have that the variance of $\mathbb{E}_ {x \sim S_i}[r_{j}(x) r_{j'}(x)]$ equals $\frac{1}{|S_i|}\mathrm{Var}_ {x \sim D}[r_{j}(x) r_{j'}(x)]$. Finally, we substitute $|S_i|=\sqrt{N}$. **Question 2.** Even though the polynomial $r_j r_{j'}$ is of degree $2k$, it has the extra property of being a product of two degree-$k$ polynomials. If $(r_j(x) r_{j'}(x))^2>B^2$, it has to be the case that either $(r_j(x))^2>B$ or $(r_{j'}(x))^2>B$. The probability of either of those events can be bounded by referring to the definition of $k$-tameness. **Question 3.** We confirm that the additive term $\Delta$ is indeed not necessary in the second line of Equation (B.4). We will correct this typo in the revision. We note, however, that the third line of Equation (B.4) should still have an additive term of $2\Delta$ (replacing the current value $\Delta$ with the value $2\Delta$ entails changing $\Delta$ to $\Delta/2$ in the line after line 701 and does not change the conclusion on line 702). The third line of Equation (B.4) follows from the second line (with $\Delta$ removed) as follows. Let $u_j$ denote $\mathbb{E}_ {x \sim S} [f(x) \mathbf{1}_ {j\Delta \leq (p(x))^2 < (j+1)\Delta } ]$ and let $v_j$ denote $\mathbb{E}_ {x \sim D'} [f(x) \mathbf{1}_ {j\Delta \leq (p(x))^2 < (j+1)\Delta } ]$; then, by the triangle inequality, the second line of Equation (B.4) is at most $\sum_{j=0}^{B/\Delta}( (j \Delta) |u_j - v_j| + \Delta u_j + \Delta v_j)$. We also observe that $j \Delta \leq B$, $\sum_{j=0}^{B/\Delta} u_j \leq 1$, and $\sum_{j=0}^{B/\Delta} v_j \leq 1$, which bounds the whole expression by $2\Delta+B\sum_{j=0}^{B/\Delta} |u_j - v_j|.$ **Question 4.** Thank you for pointing this out; we have added a short claim proving that there indeed exists a value of $\tau_i$ satisfying the condition, assuming certain high-probability events take place. 
The proof is based on a simple averaging argument. In particular, $\tau_i$ is defined as the threshold such that the current set $S_i ^{\text{filtered}}$ contains unreasonably many points $x$ such that $(p_i(x))^2 >\tau_i$. If such a threshold $\tau_i$ did not exist, then $S_i ^{\text{filtered}}$ would not satisfy line 919 (which is a contradiction). More formally, for the sake of contradiction, suppose that for every $\tau \ge 0$ it is the case that $$ \frac{1}{N} | \{ x \in S_i^ {\text{filtered}} : (p_ i(x))^2 > \tau \} | \le \frac{10}{\alpha} ( \mathbb{P}_ {x \sim S_D} [B_0 \ge (p_ i(x))^2 > \tau] + \Delta_0 ). \tag{1} $$ Since every element $x$ of $S_i^ {\text{filtered}}$ satisfies $(p_i(x))^2 \le B_0$ we have $$ \frac{1}{N} \sum_{x \in S_i^{\text{filtered}}} (p_i(x))^2 = \int_{\tau = 0}^{B_0} \frac{1}{N} | \{ x \in S_i^{\text{filtered}} : (p_i(x))^2 > \tau \} | d\tau. \tag{2} $$ Combining Equations (1) and (2) gives $$ \frac{1}{N} \sum_{x \in S_i^ {\text{filtered}}} (p_i(x))^2 \le \frac{10}{\alpha} \left( \Delta_0 B_0 + \int_{0}^{B_0} \mathbb{P}_ {x \sim S_D} [B_0 \ge (p_i(x))^2 > \tau] d\tau \right), $$ which in turn is bounded by $\frac{10}{\alpha} ( \Delta_0 B_0 + \mathbb{E}_ {x \sim S_D} [ (p_i(x))^2 \cdot 1_{x \le B_0} ] )$. Additionally, since $S_D$ is assumed to satisfy property (3) in Claim 1, and $\hat{M}$ is assumed to satisfy Equation (E.1) (see line 929), we have $\mathbb{E}_ {x \sim S_D} \left[ (p_i(x))^2 \cdot 1_ {(p_i(x))^2 \le B_0} \right] \le 2\mathbb{E}_ {x \sim D}[(p_i(x))^2] \le \frac{11}{5} (p_i)^T \hat{M} (p_i) \le \frac{11}{5} \le 5$. Overall, we have bounded $\frac{1}{N} \sum_{x \in S_i^ {\text{filtered}}} (p_i(x))^2$ by $\frac{10}{\alpha} \left( \Delta_0 B_0 + 5 \right)$, which contradicts the premise that $\frac{1}{N} \sum_{x \in S_i^ {\text{filtered}}} (p_i(x))^2 > \frac{50}{\alpha} (1 + \Delta_0 B_0)$, finishing the proof. 
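The averaging argument above relies on the layer-cake identity in Equation (2). As a quick numerical sanity check (synthetic values only, not part of the paper's proof), the identity can be verified exactly by evaluating the piecewise-constant integrand between consecutive sorted values:

```python
import numpy as np

# Sanity check of the layer-cake identity (Equation (2) above): for values
# v_i = (p_i(x))^2 lying in [0, B0],
#   (1/N) * sum_i v_i  ==  (1/N) * integral_0^B0 |{i : v_i > tau}| dtau.
# All values below are synthetic stand-ins.
rng = np.random.default_rng(0)
B0 = 4.0
vals = rng.uniform(0.0, B0, size=1000)  # stand-ins for the (p_i(x))^2 values

lhs = vals.mean()

# The integrand |{i : v_i > tau}| is piecewise constant between consecutive
# sorted values, so the integral can be evaluated exactly.
svals = np.sort(vals)
n = len(svals)
integral, prev = 0.0, 0.0
for k, t in enumerate(svals):
    integral += (t - prev) * (n - k)  # n - k values exceed tau on [prev, t)
    prev = t
rhs = integral / n
# lhs and rhs agree up to floating-point error
```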
--- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed answers to my and other reviewers questions, and I will increase my score.
Summary: This paper studies two fundamental learning setups, namely PQ learning and TDS learning (Testable Learning with distribution shift), both motivated by covariate shift. In PQ learning, the algorithm is given labeled samples from $\mathcal D^{\text{train}}$ over $\mathbb R^d \times \{0, 1\}$ with marginal distribution $D$, and unlabeled samples from some test distribution $\mathcal D^{\text{test}}$ (also over $\mathbb R^d \times \{0, 1\}$). Assume that there is some common hypothesis $h$ that achieves a small error $\lambda$ under both $\mathcal D^{\text{train}}$ and $\mathcal D^{\text{test}}$, the goal of the learning algorithm is to output a selector $g: \mathbb R^d \mapsto \{0, 1\}$ and a classifier $\hat h$ such that (i) $\hat h$ achieves a small error with respect to $\mathcal D^{\text{test}}$ restricted to the acceptance region of $g$, and (ii) most samples from $D$ will not be rejected by $g$. The main contribution is an algorithm that works when (i) the marginal $D$ of $\mathcal D^{\text{train}}$ satisfies some polynomial concentration properties (including isotropic log-concave) and (ii) the hypothesis class admits $L_2$ sandwiching polynomial under $D$. In particular, it achieves a rejection rate of $\eta$ and an accuracy guarantee of $O( \lambda / \eta )$. The bounds are optimal when one sets $\eta = \sqrt{\lambda}$ to balance the rejection rate and accuracy. In tolerant TDS learning, the algorithm similarly has labeled sample access to $D^{\text{train}}$ and unlabeled sample access to $D^{\text{test}}$. In addition, the algorithm is allowed to reject $\mathcal D^{\text{test}}$ when it finds that the marginals of $D^{\text{train}}$ and $\mathcal D^{\text{test}}$ are at least $\theta$-far from each other in TV distance. The name ``tolerance'' follows exactly from the constraint that the algorithm is not allowed to reject when $D^{\text{train}}$ and $\mathcal D^{\text{test}}$ are close but not identical to each other. 
Their algorithm works in a setup similar to the PQ learning case and achieves a final error of $O(\lambda) + 2 \theta + \epsilon$, where $\lambda$ is the optimal error of some common hypothesis under $\mathcal D^{\text{train}}$ and $\mathcal D^{\text{test}}$ (the sum of the two errors), $\theta$ is the tolerance in TV distance, and $\epsilon$ is the error parameter. The core technique is a spectral-based outlier removal procedure resembling that of [DKS18], which performs filtering on the dataset until the $L_2$ norm of any polynomial under the empirical distribution becomes at most a constant factor of its $L_2$ norm under the reference distribution (the marginal of $\mathcal D^{\text{train}}$). However, the analysis in the current work is tighter, and in particular exhibits non-trivial guarantees even if $\mathcal D^{\text{test}}$ is very far from $\mathcal D^{\text{train}}$ in total variation distance, which is crucial in the technique's application to PQ learning. After ensuring that the spectrum of the dataset is appropriately bounded, the algorithm runs polynomial regression, and its guarantees mainly follow from the existence of $L_2$ sandwiching polynomials. Strengths: The application of spectral outlier removal in the context of distribution shift is natural and turns out to be quite powerful. The most surprising part is that the procedure works even when the TV distance between the test and the training distributions is more than $1/2$. A naive application of spectral outlier removal will not work in this case, as the filter may end up removing all points from the dataset (since the standard guarantee only says that the filter removes more "bad" points than "good" points). The analysis of this work circumvents the issue and characterizes a tradeoff between the rejection rate $\alpha$ and the final bound on the $L_2$ norm of polynomials. 
This makes PQ learning possible even when the TV distance between $\mathcal D^{\text{train}}$ and $\mathcal D^{\text{test}}$ approaches $1$ (albeit at the cost of a larger learning error in the end). Weaknesses: A recent work [CKKSV24] shows that non-tolerant TDS learning can also be accomplished when the hypothesis class has $L_1$ sandwiching polynomials. This makes TDS learning of some important hypothesis classes such as $AC_0$ circuits and quadratic threshold functions possible. However, the technique of spectral outlier removal in this work seems applicable only when one has $L_2$ sandwiching polynomials, as otherwise one cannot easily replace moment-matching with spectral boundedness. Technical Quality: 3 Clarity: 3 Questions for Authors: In Definition 4.5, should it be "from some distribution $\mathcal D^{\text{train}}$" instead of "from some distribution $\mathcal D$"? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: Yes, they have. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
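The two PQ-learning guarantees summarized in this review (small error of the classifier $\hat h$ on the region the selector $g$ accepts, and a small rejection rate of $g$ under the training marginal) can be phrased as two measurable quantities. Below is a minimal illustrative sketch with hypothetical toy functions; it is not the paper's algorithm, only a way to make the two objectives concrete:

```python
import numpy as np

def pq_metrics(g, h, f, X_test, X_train):
    """Two PQ-learning quantities: the error of classifier h on the test
    points that the selector g accepts, and the rejection rate of g on
    training-distribution points."""
    acc = g(X_test)
    err_on_accepted = float(np.mean(h(X_test[acc]) != f(X_test[acc])))
    rejection_rate = 1.0 - float(np.mean(g(X_train)))
    return err_on_accepted, rejection_rate

# Toy setting: ground truth f is a halfspace; h matches f exactly, and g
# rejects points far from the training region (where h might be unreliable).
rng = np.random.default_rng(1)
f = lambda X: X[:, 0] > 0              # ground-truth halfspace
h = f                                  # a perfect hypothesis, for simplicity
g = lambda X: np.abs(X[:, 0]) < 3.0    # accept only near the training region
X_train = rng.normal(size=(2000, 2))
X_test = np.vstack([rng.normal(size=(900, 2)),             # in-distribution
                    rng.normal(loc=10.0, size=(100, 2))])  # shifted points
err, rej = pq_metrics(g, h, f, X_test, X_train)
```

In this toy run the shifted test points are rejected by $g$, so the error on the accepted region stays small while only a tiny fraction of training-distribution points is rejected.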
Rebuttal 1: Rebuttal: Thank you for your time and for appreciating our work. In Definition 4.5, the distribution $\mathcal{D}$ represents some distribution over the features, which is unlabeled. We instead typically use $\mathcal{D}^{\mathrm{train}}$ to denote the labeled training distribution (see lines 136–137 and also Definition 4.1).
null
NeurIPS_2024_submissions_huggingface
2,024
Summary: This work considers learning under distribution shift. Here, a learner receives i.i.d labeled data from $D_{train}$, along with i.i.d unlabeled data from $D_{test}$. It then builds a classifier with the goal of achieving high accuracy over $D_{test}$. Under arbitrary conditions, this task is impossible, so in the PQ-learning framework the classifier is allowed to abstain from prediction for points with a frequency close to the total-variation distance between $D_{train}$ and $D_{test}$. This work also considers the TDS setting, in which the learner is allowed to completely abstain from prediction if it believes its inputs are drawn from a different data distribution. The main technical idea of this work is an outlier detection scheme, which (efficiently) computes an outlier detection function $g$ meant to distinguish the points in the training sample from $D_{test}$ that are "far" from the support of $D_{train}$. Their outlier procedure works by iteratively utilizing a quadratic program to find a polynomial of degree k that strongly distinguishes an outlier set. They then remove points until the outlier set only consists of points where the polynomial solution evaluates to very large amounts. The overall idea here is that similar distributions (i.e. if $D_{train} = D_{test}$) will lead to similar polynomial evaluations over them, and consequently the existence of a "separating" polynomial serves as a means of detecting where the distributions differ. This idea is reflected in their first result, which shows that the output of this scheme results in a function $g$ such that upon filtering with $g$, polynomial evaluations over the test and training distributions must be bounded within a factor of each other (based on tolerance parameters). They additionally bound the runtime of their algorithm. 
The only requirement for this theorem to hold is that the training distribution must satisfy regularity conditions based on the degree of polynomials being used, and they subsequently show that these conditions are met by isotropic log-concave distributions. Utilizing this technical idea, this work proceeds by giving a PQ-learning algorithm for halfspaces over Gaussian training distributions. Despite the limited scope of this case, this work nevertheless provides the first dimension-efficient solution to this problem. Then, this work continues with a more general result for PQ-learning, which, under a condition of "reasonableness" for a pair $(D, F)$, gives an efficient learning algorithm. Under the same condition, this work concludes by giving results for success in the related TDS setting. Strengths: This paper offers a solution to a well-known theoretical problem that achieves the best known bounds. I particularly like that their algorithms are computationally efficient (in addition to enjoying the typical performance guarantees). Their technical idea for outlier detection (which forms the core of the paper) is relatively well explained and seems to me an innovative way to approach outlier detection. Weaknesses: The latter half of the paper is fairly burdened with algebra and consequently a bit difficult to follow. I would have appreciated a greater focus on intuition, with more of the technical details being left to the appendix. Technical Quality: 4 Clarity: 2 Questions for Authors: 1. Could you give some intuition for why you began your PQ-results with Gaussian distributions? The outlier-detection procedure works for a more general set of distributions, so it would be interesting if you could spell out the additional properties of Gaussians that make your first result possible. 2. Could you similarly give more intuition about the $(F, D)$-reasonableness condition? 
The definition feels rather extensive (given the 3 conditions in it) and I consequently feel it could merit a longer discussion. As it is currently written, it feels a bit convoluted to me. Confidence: 4 Soundness: 4 Presentation: 2 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
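The iterative outlier-removal loop this review summarizes — repeatedly finding a low-degree polynomial whose values separate the test sample from the reference distribution, then pruning the points on which it is extreme — can be caricatured as follows. This is an illustrative simplification, not the paper's actual procedure: it uses only degree-1 (affine) features as a stand-in for the full degree-$k$ monomial expansion, a fixed quantile-based pruning rule, and arbitrarily chosen constants.

```python
import numpy as np

def spectral_filter(X_ref, X_test, C=2.0, max_iter=50):
    """Sketch: prune test points until, for every affine "polynomial" p,
    the empirical mean of p^2 on the kept points is at most C times its
    mean on the reference sample (a toy version of spectral boundedness)."""
    phi = lambda X: np.hstack([np.ones((len(X), 1)), X])  # degree-1 features
    F_ref = phi(X_ref)
    M_ref = F_ref.T @ F_ref / len(F_ref)
    w, V = np.linalg.eigh(M_ref)
    inv_sqrt = V @ np.diag(w ** -0.5) @ V.T               # M_ref^{-1/2}
    keep = np.ones(len(X_test), dtype=bool)
    for _ in range(max_iter):
        F = phi(X_test[keep])
        M = F.T @ F / len(F)
        evals, evecs = np.linalg.eigh(inv_sqrt @ M @ inv_sqrt)
        if evals[-1] <= C:                 # spectrally bounded: stop
            break
        p = inv_sqrt @ evecs[:, -1]        # worst-case polynomial coefficients
        scores = (phi(X_test) @ p) ** 2
        thr = np.quantile(scores[keep], 0.95)
        keep &= scores <= thr              # prune the most extreme points
    return keep

# Toy run: reference sample is standard normal; the test sample mixes
# in-distribution points with a cluster of shifted outliers.
rng = np.random.default_rng(0)
X_ref = rng.normal(size=(2000, 1))
inliers = rng.normal(size=(450, 1))
outliers = 10.0 + 0.5 * rng.normal(size=(50, 1))
X_test = np.vstack([inliers, outliers])
keep = spectral_filter(X_ref, X_test)
```

In this toy run the filter removes essentially all of the shifted cluster while keeping the in-distribution points, since the top generalized eigendirection of the whitened second-moment matrix picks out the polynomial on which the outliers are extreme.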
Rebuttal 1: Rebuttal: We wish to thank the anonymous reviewer for their constructive feedback and suggestions. **Gaussian assumption:** The reviewer is correct that the outlier removal procedure works under weaker assumptions than Gaussianity. In particular, it will work for any tame distribution (see lines 155–158 for a definition). However, in order to obtain our PQ learning results, we also make use of the existence of low degree $\mathcal{L}_2$-sandwiching approximators. The existence of such approximators has been proven in prior work for the Gaussian distribution as well as for the uniform distribution over the hypercube. Nonetheless, we believe that simple classes like halfspaces and halfspace intersections admit low-degree sandwiching approximators with respect to other distributions as well and establishing appropriate bounds is an interesting direction for future work. The important relevant properties are, in general, both concentration and the anti-concentration of the Gaussian distribution, which are, for example, also satisfied by strongly log-concave distributions. **Reasonable pairs:** Definition 4.5 indeed consists of 3 properties. The first one is the existence of sandwiching approximators, which is known to be important for learning in the presence of distribution shift by prior work on TDS learning [KSV24b]. The second one is the tameness condition, which is important for the outlier removal procedure and was introduced in the work of [DKS18] for similar purposes. The third one ensures generalization for polynomial regression. We will add an appropriate discussion in future revisions. *[DKS18] Ilias Diakonikolas, Daniel M Kane, and Alistair Stewart. Learning geometric concepts with nasty noise. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, pages 1061–1073, 2018.* *[KSV24b] Adam R Klivans, Konstantinos Stavropoulos, and Arsen Vasilyan. Testable learning with distribution shift. 
37th Annual Conference on Learning Theory, 2024.* --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for your response. Overall, I'll maintain my current score but increase my confidence.
Robo-Instruct: Simulator-Augmented Instruction Alignment For Finetuning CodeLLMs
Reject
Summary: The paper introduces ROBO-INSTRUCT, a novel framework designed to generate synthetic training data for fine-tuning small language models to create domain-specific robot programs. The framework features two main components: ROBOSIM, an algorithm that validates programs using angelic execution, and INSTALIGN, which aligns instructions with the generated programs. The key contributions of this work include the development of ROBO-INSTRUCT to enhance the code generation performance of small open-weight language models for domain-specific robot programs. This framework introduces ROBOSIM, which features a dynamic world synthesis and evaluation process for generating relevant world states and performing automated code checks for diverse tasks. Additionally, it includes INSTALIGN, a procedure that refines instruction-code pairs to improve alignment between instructions and the code generated by SELF-INSTRUCT. By fine-tuning the Codellama-Python-7B model using ROBO-INSTRUCT, the model significantly outperforms several other open-source and most proprietary models. Strengths: - The paper demonstrates strong clarity and organization, making complex concepts accessible to readers. Each section flows logically, and technical details are explained effectively, ensuring that the methodology and findings are easy to follow. - The paper thoroughly reviews and incorporates current literature. - The paper meticulously validates all its claims through experimental results and detailed analysis. The effectiveness of ROBO-INSTRUCT, ROBOSIM, and INSTALIGN is demonstrated convincingly through empirical data and comparisons with existing models and benchmarks. This empirical validation ensures that the contributions are not just theoretical but substantiated with practical evidence. Weaknesses: - While ROBO-INSTRUCT offers significant advancements for fine-tuning language models in robot programming, there are some weaknesses to consider. 
Firstly, it heavily relies on SELF-INSTRUCT for generating initial programs, potentially introducing biases from the base model's training data. This could limit the diversity and quality of the generated programs. - Moreover, while ROBO-INSTRUCT shows promising results on benchmarks like ROBOEVAL, its application to real-world robot programming tasks requires thorough evaluation and validation. Real-world robot environments often present unpredictable challenges that benchmark datasets may not fully capture, necessitating further testing to assess the framework's robustness and generalizability in practical applications. Technical Quality: 3 Clarity: 3 Questions for Authors: What factors do you think contribute to GPT-4's superior performance compared to ROBO-INSTRUCT, despite your method showing competitive results against other models like GPT-3.5-Turbo and Gemini-1.0-Pro? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations of the work are clearly stated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We would like to offer further clarification beyond the general response above to address the specific concerns and points raised in the review. ***Weakness:*** While ROBO-INSTRUCT offers significant advancements for fine-tuning language models in robot programming, there are some weaknesses to consider. Firstly, it heavily relies on SELF-INSTRUCT for generating initial programs, potentially introducing biases from the base model's training data. This could limit the diversity and quality of the generated programs. ***Response:*** As discussed in Section 5 (Conclusion, Limitation, and Future Works), we acknowledge that our proposed method, which builds on Self-Instruct, could potentially be influenced by biases in the base model's training data. However, we would like to emphasize that our method is orthogonal to many other popular approaches used in Self-Instruct. Therefore, it is possible to apply Robo-Instruct alongside these methods to mitigate these concerns. For instance, the OSS-Instruct method by Magicoder (Wei et al., 2023) and the Evol-Instruct method by Wizardcoder (Luo et al., 2023) both aim to enhance the quality and diversity of data generated from Self-Instruct. Magicoder has demonstrated that combining these orthogonal methods (OSS-Instruct + Evol-Instruct) can lead to better outcomes. Consequently, we believe that using Robo-Instruct in conjunction with these methods could enhance the performance of fine-tuned models in domain-specific applications. Investigating the integration of these orthogonal methods for generating domain-specific datasets would be a valuable direction for future work. ***Weakness:*** Moreover, while ROBO-INSTRUCT shows promising results on benchmarks like ROBOEVAL, its application to real-world robot programming tasks requires thorough evaluation and validation. 
Real-world robot environments often present unpredictable challenges that benchmark datasets may not fully capture, necessitating further testing to assess the framework's robustness and generalizability in practical applications. ***Response:*** We have deployed the Robo-Instruct fine-tuned model on a real robot using edge computing to demonstrate its real-world practicality, as illustrated in the PDF in the general response. This information will be included in the appendix of our revised paper. ***Question:*** What factors do you think contribute to GPT-4's superior performance compared to ROBO-INSTRUCT, despite your method showing competitive results against other models like GPT-3.5-Turbo and Gemini-1.0-Pro? ***Response:*** It is possible that GPT-4 has a significantly larger number of model parameters and is trained with more resources. As a result, GPT-4 has much longer inference times. In contrast, our fine-tuned model can perform inference five times faster, as illustrated in the PDF in the general response. --- Rebuttal Comment 1.1: Title: Dear reviewer, please read and respond to authors' rebuttal. Comment: This paper has very diverse reviews and it would benefit a lot to start a discussion to clarify any confusing points. Thanks! Your AC.
Summary: The paper tries to improve the performance of small open-sourced LLMs to generate code that can run successfully on a service robot simulator to solve tasks. The idea is to use another small model to generate program data using SELF-INSTRUCT and fine-tune a 7B model for the robotics domain. The authors note that data generated by SELF-INSTRUCT may have good diversity but lack correctness. To this end, they build a simulator, RoboSIM, that takes the programs generated by SELF-INSTRUCT and verifies the correctness of the execution in addition to syntax errors given a predefined robot API. They further modify the instructions to align with the verified programs better. Overall, they show that the 7B LLM fine-tuned on this clean data can outperform GPT-3.5-Turbo in the robotics domain. Strengths: The paper is well-written and motivated. Figures are easy to understand and helpful in conveying high-level ideas. The idea to build a pseudo-simulator that tracks world states without running the generated programs through an actual simulator is novel. It is amazing that synthetically generated data filtered by simple heuristics, such as requiring programs to pass the world-state-tracking RoboSim, is sufficient to help improve performance on an actual simulator. Weaknesses: While the paper is well-written for the scope it sets for itself, I am not sure if the contribution is significant enough. There are many works using LLMs to generate data and fine-tune domain-specific models, so the idea behind this paper is not super novel. The performance gain is also limited by the rather heuristic method considering the gap between the best model presented by the paper and GPT-4, which it sought out to beat. In fact, given the 17% performance gap, simple baselines could be using GPT-4 to generate the programs and fine-tuning small models or using GPT-4 as the critic to filter programs. 
These simpler heuristics may yield better results and prove the Robo-Instruct method (which is also rather heuristic) proposed by the paper unnecessary. For reference consider [1] Improving Small Language Models on PubMedQA via Generative Data Augmentation Therefore, I am not sure the contribution of this paper is all that significant. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Could you explain why InstAlign would improve pass@1 success rate by ~5% on top of RoboSIM+RU in table 2? What's the error bar/variance of these statistics? What is the intuition that InstAlign would help and could you give some examples? 2. One claim made early on in the paper is that Self-Instruct gives high-diversity data but lacks correctness, and a simulator can give correctness but lacks diversity. The paper is trying to get the best of both worlds. While the current experiment results show that Robo-Instruct does generate data with high correctness, we don't see evidence that Robo-Instruct preserves diversity. Can you show some evidence (through some metrics) on diversity? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Building RoboSim to track world states might work only for simple pick-n-place or task-planning problems. How to generalize this heuristic of tracking world states to more complex problems is unclear. Some analysis of the scope (suitable for what kind of problem class) of this approach is needed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We would like to offer further clarification beyond the general response above to address the specific concerns and points raised in the review. ***Weakness:*** While the paper is well-written for the scope it sets for itself, I am not sure if the contribution is significant enough. There are many works using LLM to generate data and fine-tune domain-specific models, so the idea behind this paper is not super novel. ***Response:*** While fine-tuning domain-specific models using techniques such as Self-Instruct is common, obtaining high-quality correct instruction-program pairs remains a challenging problem. Since custom-defined domain-specific APIs may not be present in the training corpus of existing LLMs, methods like Self-Instruct can generate incorrect training data, as shown in Appendix A.2.1. Table 1 in the paper also highlights the performance gap when using the Self-Instruct method. Verifying the correctness of these generated programs using a simulator is a natural approach. However, creating an appropriate simulation environment for each program can be challenging, often requiring manual specifications of entities, types, and states. In this work, we address this challenge by introducing a novel approach that automates the generation of simulation environments for each generated program. As outlined in the general response, this method is versatile, with the potential to be extensible beyond robotics, capable of handling arbitrary open-world tasks without the need for manual coding of entities, types, or states. ***Weakness:*** The performance gain is also limited by the rather heuristic method considering the gap between the best model presented by the paper and GPT-4 it sought out to beat. In fact, given the 17% performance gap, simple baselines could be using GPT-4 to generate the programs and fine-tuning small models or using GPT-4 as the critic to filter programs. 
These simpler heuristics may yield better results and prove the Robo-Instruct method (which is also rather heuristic) proposed by the paper unnecessary. For reference consider [1] Improving Small Language Models on PubMedQA via Generative Data Augmentation. Therefore, I am not sure the contribution of this paper is all that significant. ***Response:*** Our preliminary results show that a simple distillation of GPT-4 is not effective for fine-tuning domain-specific models, and the best pass@1 score was around 64%, which is lower than the score reported in Robo-Instruct. We will add this information to the appendix of the revised paper. More importantly, we chose to generate data from an open-weight model instead of proprietary models like GPT-4 because open-weight models are often preferred in practice. They are cost-free, can be deployed on local servers for greater flexibility, and help address privacy concerns when dealing with sensitive information related to custom domain-specific APIs. ***Question:*** Could you explain why InstAlign would improve pass@1 success rate by ~5% on top of RoboSIM+RU in table 2? What's the error bar/variance of these statistics? What is the intuition that InstAlign would help and could you give some examples? ***Response:*** The intuition behind why InstAlign would improve pass@1 accuracy is twofold: 1. ***Specificity of Aligned Instruction:*** The aligned instruction is more specific to the generated program, providing clearer guidance on the intended actions. Instead of offering generic instructions, it can communicate what the program aims to accomplish more precisely. 
For example:

```python
def task_program():
    for person in ["Arjun", "Alice", "Eve"]:
        response = ask(person, "Do you like the weather today?", ["yes", "no"])
```

&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *Original instruction:* Ask ***everyone*** in the room if they like the weather today

&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *Aligned Instruction:* Ask ***Arjun, Alice, and Eve*** if they like the weather today

2. ***Consistency with Program Actions:*** Since the program is generated stochastically, it may modify actions that are misaligned with the original instruction. Aligned instructions can correct these discrepancies, as seen in this example:

```python
def task_program():
    if not is_in_room("pen"):
        pick("pen")
```

&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *Original instruction:* ***Ask*** if there is a pen here. If so, pick it up.

&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *Aligned Instruction:* ***Check*** if there is a pen here. If so, pick it up.

Aligning the instruction with the program may reduce ambiguity during fine-tuning, leading to improved performance by minimizing noise and enhancing the clarity of the tasks being performed.

***Question:*** One claim made early on in the paper is that Self-Instruct gives high-diversity data but lacks correctness, and a simulator can give correctness but lacks diversity. The paper is trying to get the best of both worlds. While the current experiment results show that Robo-Instruct does generate data with high correctness, we don't see evidence that Robo-Instruct preserves diversity. Can you show some evidence (through some metrics) on diversity?

***Response:*** We analyze the generated dataset in Appendix A.3. Our findings indicate that Robo-Instruct is capable of producing data with a distribution similar to that of Self-Instruct. Therefore, Robo-Instruct maintains the diversity of Self-Instruct. 
--- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thanks for the detailed response, especially clarifying the central contribution of "automatically populate simulation environments for use in verification" via Angelic Execution. A follow-up question to your example in the general rebuttal - "RoboSim will infer that apple is an object from its use with pick_up in the first line and recognize that the go_to function, which requires a location type, is inappropriately called on an object in the second line." How does your method differentiate between (1) apple is an object that affords pick_up and go_to(apple) is erroneous and (2) apple is a location that affords go_to and pick_up(apple) is erroneous? Overall, the rebuttal improves my assessment of the paper contribution. However, I am still a bit skeptical whether the proposed method can scale to complex tasks that involve 20+ step executions. Additionally, the best pass@1 score was around 64% for distilling GPT-4, not significantly lower than 68% (Robo-Instruct). Despite the arguments that GPT-4 is not cost-free or open-weight, I still see it as a simple and effective way of solving the same problem. Hence I am raising my score but not by a whole lot. --- Reply to Comment 1.1.1: Comment: Thank you for the improved score. We would like to offer more detailed responses to the questions raised in the comments. **Question:** "RoboSim will infer that apple is an object from its use with pick_up in the first line and recognize that the go_to function, which requires a location type, is inappropriately called on an object in the second line." How does your method differentiate between (1) apple is an object that affords pick_up and go_to(apple) is erroneous and (2) apple is a location that affords go_to and pick_up(apple) is erroneous? **Answer:** RoboSim infers types in the order that they are referenced. The key property it checks for is consistency --- irrespective of the ordering. 
Hence, if `go_to("apple")` is called before `pick_up("apple")`, it would guess that `apple` is of type `location`, which would be inconsistent with the subsequent call `pick_up("apple")`. If there is only a single-type use of an object, such as if only `go_to("apple")` is called, RoboSim would guess that `apple` is of type `location`, and it would not throw any errors. As in Angelic Execution [1], RoboSim maintains an ***optimistic*** assumption of types being valid, unless they are identified to be inconsistent. In addition, RoboSim checks for violations of domain-specific constraints (e.g., trying to pick up a second object while already holding a first). **Question:** However, I am still a bit skeptical whether the proposed method can scale to complex tasks that involve 20+ step executions. **Answer:** Our proposed method, RoboSim, automatically synthesizes simulation environments for program verification. Conceptually, it supports complex tasks with arbitrary execution steps. As the program is executed during verification, additional concepts, such as entities, types, and states, are introduced. In fact, our method benefits from tasks with more execution steps, as this provides more information about the generated program, increasing the likelihood of identifying and rejecting failing programs. Additionally, ***while solving long-horizon tasks was not an originally claimed contribution of this paper, we were intrigued by the reviewer’s inquiry and conducted a small qualitative experiment to evaluate how well the base model, Self-Instruct, Robo-Instruct fine-tuned models, and GPT-4 perform on long-horizon tasks.*** We created two instructions: 1. Let's play a game: Double and give it to the next person. Start with 1 dollar. Go to rooms A, B, C, D, E, F, and G. If you see someone, tell them how much money you have. Then ask if they would like to take the money now or double the amount and give it to the next person. 
If they choose to take it, the game is over, and you should come back to me. Otherwise, double your money and continue. If, in the end, no one takes the money, tell me how much you still have. 2. Go to my office and check if I have a table, a chair, and a monitor there. If any of these items are missing, go to Jason's office and see if he is there. If he is, ask him if I can borrow the missing items. If he agrees, pick up each missing item and bring it to my office. If Jason is not in his office or he says no, come back and tell me the reason. We generated the program using each model with a temperature setting of 0 and found that ***it is possible for our Robo-Instruct fine-tuned model to produce correct programs for these long-horizon tasks***, while both the base model and the Self-Instruct fine-tuned model fail. Additionally, GPT-4 made an error on the second instruction. Due to space limitations, we will share the program generated by our Robo-Instruct fine-tuned model along with GPT-4's results in a separate comment below. ***Question:*** Despite the arguments that GPT-4 is not cost-free or open-weight, I still see it as a simple and effective way of solving the same problem. ***Answer:*** We acknowledge that GPT-4 can be a viable option for generating datasets for fine-tuning domain-specific applications. However, we also emphasize that in many scenarios, ***this option is not feasible due to privacy concerns***, particularly in sectors such as financial institutions [2] and even in tech industries [3][4], despite the growing interest in using these technologies [5]. Our work offers a potential solution for these domains to fine-tune their internal models while addressing these privacy concerns. Citations: 1. Broy & Wirsing, On the Algebraic Specification of Nondeterministic Programming Languages, 1981 2. Landymore, Frank. “Giant Bank JP Morgan Bans Chatgpt Use among Employees.” Futurism, 2023 3. Gurman, Mark. 
“Samsung Bans Chatgpt, Google Bard, Other Generative AI Use by Staff after Leak.” Bloomberg, 2023 4. Richard, Isaiah. “Elon Musk: IOS 18 a ‘security Violation’ for CHATGPT AI Integration” Tech Times, 2024 5. Harvel N, et al., “Can LLMs Answer Investment Banking Questions?” AAAI 2024 Spring Symposium Series, 2024 --- Reply to Comment 1.1.2: Title: Follow-up Generated Programs Comment:

***Instruction 1***

Ours:

```python
def task_program():
    start_loc = get_current_location()
    list_of_rooms = ["A", "B", "C", "D", "E", "F", "G"]
    money = 1
    for room in list_of_rooms:
        go_to(room)
        if is_in_room("person"):
            say("I have $" + str(money))
            response = ask("", "Would you like to take the money now or double the amount and give it to the next person?", ["Take now", "Double and give"])
            if response == "Take now":
                go_to(start_loc)
                say("The game is over, I gave the money to " + room)
                return
            else:
                money *= 2
    go_to(start_loc)
    say("No one took the money, I still have $" + str(money))
```

GPT-4:

```python
def task_program():
    start_loc = get_current_location()
    rooms = ["A", "B", "C", "D", "E", "F", "G"]
    money = 1
    for room in rooms:
        go_to(room)
        if is_in_room("person"):
            say("I have " + str(money) + " dollars")
            response = ask("", "Would you like to take the money now or double the amount and give it to the next person?", ["Take it", "Double it"])
            if response == "Take it":
                say("Game over. The person in room " + room + " took the money.")
                go_to(start_loc)
                return
            else:
                money *= 2
    go_to(start_loc)
    say("No one took the money. I still have " + str(money) + " dollars.")
```

***Instruction 2***

Ours:

```python
def task_program():
    start_loc = get_current_location()
    go_to("my office")
    table_found = is_in_room("table")
    chair_found = is_in_room("chair")
    monitor_found = is_in_room("monitor")
    if not table_found:
        go_to("Jason's office")
        if is_in_room("Jason"):
            response = ask("Jason", "Can I borrow the table from you?", ["Yes", "No"])
            if response == "Yes":
                go_to("Jason's office")
                pick("table")
                go_to("my office")
                place("table")
            else:
                go_to(start_loc)
                say("Jason said no")
        else:
            go_to(start_loc)
            say("Jason is not in his office")
    if not chair_found:
        go_to("Jason's office")
        if is_in_room("Jason"):
            response = ask("Jason", "Can I borrow the chair from you?", ["Yes", "No"])
            if response == "Yes":
                go_to("Jason's office")
                pick("chair")
                go_to("my office")
                place("chair")
            else:
                go_to(start_loc)
                say("Jason said no")
        else:
            go_to(start_loc)
            say("Jason is not in his office")
    if not monitor_found:
        go_to("Jason's office")
        if is_in_room("Jason"):
            response = ask("Jason", "Can I borrow the monitor from you?", ["Yes", "No"])
            if response == "Yes":
                go_to("Jason's office")
                pick("monitor")
                go_to("my office")
                place("monitor")
            else:
                go_to(start_loc)
                say("Jason said no")
        else:
            go_to(start_loc)
            say("Jason is not in his office")
    go_to(start_loc)
    say("task is completed")
```

GPT-4 (***incorrect***):

```python
def task_program():
    start_loc = get_current_location()
    go_to("my office")
    missing_items = []
    for item in ["table", "chair", "monitor"]:
        if not is_in_room(item):
            missing_items.append(item)
    if len(missing_items) > 0:
        go_to("Jason's office")
        if is_in_room("Jason"):
            for item in missing_items:
                response = ask("Jason", "Can I borrow your " + item + "?", ["Yes", "No"])
                if response == "Yes":
                    pick(item)          # <= BUG: did not return to Jason's office after going back to my office
                    go_to("my office")  # <= BUG: did not return to Jason's office after going back to my office
                    place(item)         # <= BUG: did not return to Jason's office after going back to my office
                else:
                    go_to(start_loc)
                    say("Jason did not allow me to borrow his " + item)
                    return
        else:
            go_to(start_loc)
            say("Jason is not in his office")
            return
    go_to(start_loc)
    say("Task is completed")
```
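To make the type-consistency idea discussed in this thread concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of order-independent type inference in the spirit of RoboSim's angelic execution; the `WorldState` class, `InconsistentTypeError`, and the two API methods are illustrative assumptions, not names from the paper:

```python
# Hypothetical sketch: infer entity types optimistically from first use,
# then flag any later use that contradicts the earlier inference.

class InconsistentTypeError(Exception):
    pass

class WorldState:
    def __init__(self):
        self.types = {}  # entity name -> inferred type ("object" or "location")

    def _expect(self, name, expected_type):
        # Optimistically record the type on first use; reject contradictions.
        inferred = self.types.setdefault(name, expected_type)
        if inferred != expected_type:
            raise InconsistentTypeError(
                f"{name!r} used as {expected_type} but previously inferred as {inferred}")

    def pick_up(self, name):   # pick_up implies its argument is an object
        self._expect(name, "object")

    def go_to(self, name):     # go_to implies its argument is a location
        self._expect(name, "location")

world = WorldState()
world.go_to("kitchen")         # "kitchen" inferred as a location: accepted
world.pick_up("apple")         # "apple" inferred as an object: accepted
try:
    world.go_to("apple")       # contradicts the earlier inference -> rejected
except InconsistentTypeError as e:
    print("rejected:", e)
```

As in the rebuttal's description, a single-type use (only `go_to("apple")`) raises no error; only an inconsistent pair of uses does, regardless of which use comes first.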
Summary: This paper introduces a framework for generating paired instructions and robot code for further fine-tuning LLMs for robot-specific tasks. A symbolic simulator is used to check the correctness of the generated code, and an LLM is prompted with chain-of-thought reasoning to align the generated instructions. The resulting dataset was used to fine-tune a robot-specific LLM and later tested on benchmark tasks. Strengths: This paper tackles two challenges of automated data generation for robot code synthesis: one is to check the correctness of the code by grounding it in the logical state of objects, and the other is to align the generated instructions with the provided robot capabilities. By designing principled and general modules that tackle each of these problems, RoboInstruct is shown to generate useful data for fine-tuning a general-purpose LLM for robot-specific code generation applications. Weaknesses: RoboSIM can only check for semantically meaningful steps of the code and may not catch lower-level errors that require spatial/geometric reasoning, or even reasoning about physics, including commands that take in numerical parameters, e.g. move(0,0.2,0), rotate(0.75). This seems to limit the usefulness of RoboInstruct to certain types of robot APIs. Technical Quality: 3 Clarity: 4 Questions for Authors: Can the prompt of InstAlign be adapted to the instruction generation step? What information after the fact is being used by InstAlign that cannot be useful at the initial generation round? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: see weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We would like to offer further clarification beyond the general response above to address the specific concerns and points raised in the review. ***Weakness:*** RoboSIM can only check for semantically meaningful steps of the code and may not catch lower-level errors that require spatial/geometric reasoning, or even reasoning about physics, including commands that take in numerical parameters, e.g. move(0,0.2,0), rotate(0.75). This seems to limit the usefulness of RoboInstruct to certain types of robot APIs. ***Response:*** RoboSim is designed to validate the correctness of programs given a particular domain --- defined by its API, types, and properties to check. We focus on task-level correctness as the domain of application, as it is of broad interest as mentioned in the general response. However, as described in the general response, it is extensible to other domains, and we illustrate here how the domain can be easily expanded to check for lower-level actions if that is the desired level of verification. For example, consider a tabletop manipulation setting. A possible API function is `rotate(robot_gripper, radians)`. From the statement `rotate("left_hand", pi/2)`, RoboSim will infer that `left_hand` is an `entity` with the type of the robot gripper, and the state of `left_hand` is its current rotation. If there is a domain-specific constraint on the rotation of the robot gripper, such as the gripper only being able to rotate between $-\pi/6$ and $\pi/6$ radians, then the statement `rotate("left_hand", pi/2)` becomes invalid because no matter what the current rotation state of the robot gripper is, rotating by $\pi/2$ radians will exceed the maximal allowable rotation range of the gripper: $\pi/2 > \pi/6 + \pi/6$. ***Question:*** Can the prompt of InstAlign be adapted to the instruction generation step? What information after the fact is being used by InstAlign that cannot be useful at the initial generation round? 
***Response:*** InstAlign can be applied in the initial generation round, which can occur after the Self-Instruct phase and before sending the program to RoboSim. However, as shown in Table 2, where the results compare "+Reject Unsolvable (RU)" vs. "+INSTALIGN + RU", our experiment indicates that this adaptation is not effective. The key insight here is that the InstAlign procedure proposed in Robo-Instruct aligns the instruction with validated programs, whereas when applied at the initial generation round, the generated program could still be invalid, thus limiting its effectiveness. --- Rebuttal Comment 1.1: Title: Dear reviewer, please read and respond to authors' rebuttal. Comment: This paper has very diverse reviews and it would benefit a lot to start a discussion to clarify any confusing points. Thanks! Your AC. --- Rebuttal 2: Title: keeping my rating Comment: I have read the other reviewers' feedback and would like to keep my current rating. - I believe "service mobile robots" is a broad enough scope and this work makes a meaningful contribution by grounding code generation for robot-specific tasks with logic. The idea of this work can be broadly applied to other robotic applications. - I also find it unfair to diminish the contribution of this work if we expect future generations of VLM/GPTs will be better at code generation for robots. Embodied code synthesis faces its own challenges beyond training generic VLMs, and this paper provides a framework for safeguarding code generated automatically. --- Rebuttal Comment 2.1: Comment: Thank you and we appreciate your recognition of this work.
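The gripper-rotation constraint check discussed in this rebuttal can be illustrated with a small, hypothetical sketch (the constant names and function below are illustrative assumptions, not from the paper): a requested rotation whose magnitude exceeds the full width of the allowed range can never be valid, regardless of the gripper's current state.

```python
import math

# Hypothetical domain constraint: gripper rotation must stay in [-pi/6, pi/6].
MIN_ROT, MAX_ROT = -math.pi / 6, math.pi / 6

def rotation_always_invalid(delta_radians):
    """True if no state within [MIN_ROT, MAX_ROT] can absorb this rotation delta."""
    return abs(delta_radians) > (MAX_ROT - MIN_ROT)

print(rotation_always_invalid(math.pi / 2))   # pi/2 > pi/6 + pi/6, so always invalid
print(rotation_always_invalid(math.pi / 12))  # feasible from some starting states
```

A full state-tracking check would also accumulate the gripper's current rotation across statements; this sketch only captures the "invalid from any state" case described above.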
Summary: This paper introduces ROBO-INSTRUCT, a novel framework designed to improve the code generation capabilities of smaller open-weight language models (LLMs) for domain-specific robotic tasks. ROBO-INSTRUCT leverages two key components: 1. ROBOSIM with DYNAMICEVAL: A task-agnostic simulator that dynamically synthesizes a consistent world state based on the robot's actions within the program. This allows ROBOSIM to identify execution errors and validate generated programs even for diverse and complex tasks. 2. INSTALIGN: An instruction-program alignment procedure that utilizes Chain-of-Thought reasoning to refine the generated instructions. This ensures that the instructions better reflect the intent of the generated robot program, improving alignment between the two. The paper evaluates ROBO-INSTRUCT by fine-tuning a Codellama-Python-7B model and testing its performance on ROBOEVAL, a benchmark for service mobile robots. The results demonstrate that the ROBO-INSTRUCT fine-tuned model significantly outperforms other open-weight models Strengths: Novel framework: Introduces ROBO-INSTRUCT, a unique approach to generating training data for fine-tuning smaller LLMs on domain-specific robot tasks. Dynamic world synthesis: ROBOSIM's ability to dynamically create relevant world states allows it to validate diverse programs generated by SELF-INSTRUCT, overcoming the limitations of traditional simulators. Instruction-program alignment: INSTALIGN effectively refines instructions to better reflect the program's intent, improving the quality of the training dataset. Strong empirical results: Demonstrates that ROBO-INSTRUCT significantly improves the performance of small open-weight LLMs, enabling them to surpass even some proprietary LLMs. Cost-effective and private: Provides a potential alternative to deploying proprietary LLMs for local robot deployment, offering cost-effectiveness and privacy benefits. 
Weaknesses: Limited novelty: The idea of using a sim/emulator to verify the generated program has already been explored in previous works such as Chain-of-code, which is not mentioned by this work. Limited scope: The paper focuses on a specific domain (service mobile robots), and it is unclear how well ROBO-INSTRUCT generalizes to other robot domains. Lack of real-world evaluation: The paper only evaluates ROBO-INSTRUCT on a synthetic benchmark. Real-world deployment and testing are required to further assess its practicality. Technical Quality: 3 Clarity: 3 Questions for Authors: Figure 1 is confusing as the authors put the overview of the proposed method, counter examples from previous methods, as well as the benchmark results all in a single figure. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We would like to provide further clarification beyond the general response above to address the specific concerns and points raised in the review. ***Weakness:*** Limited novelty: The idea of using a sim/emulator to verify the generated program has already been explored in previous works such as Chain-of-code, which is not mentioned in this work. ***Response:*** The general response explains the novelty of our approach, and we focus here on the relation between RoboInstruct and Chain-of-Code (CoC), which we will cite in the paper revision. CoC focuses on enhancing LLM's reasoning capabilities. It *selectively simulates the interpreter* by generating the expected output of certain lines of code, which the default interpreter could not execute. The simulation procedure uses an LLM to determine valid output values that are saved into a variable in the program. For example, given a line of the program ``` answer += is_sarcastic('you don't say') ``` The LLM simulator is responsible for inferring the output of the function `is_sarcastic(string)`, and determines that `you don't say` is sarcastic and subsequently returns the value of 1. However, such methods are unsuitable for the problem this work addresses. For example, consider the API function pick_up, mentioned in the general response. This API function does not return a value; instead, it updates the state of the robot in the simulation environment to indicate that the robot is holding the object. Consequently, using an LLM as the interpreter cannot simulate the state of entities in the simulation environment. ***Weakness:*** Limited scope: The paper focuses on a specific domain (service mobile robots), and it is unclear how well ROBO-INSTRUCT generalizes to other robot domains. 
***Response:*** We would like to highlight that our focus on general-purpose service mobile robots is a widely studied and popular area in the AI community as mentioned in the general response. In addition to the example of how RoboInstruct can be applied to other domains, we provide an additional example here of applying RoboInstruct to different types of robot tasks beyond service mobile robots. For example, consider a tabletop manipulation setting. A possible API function is `rotate(robot_gripper, radians)`. From the statement `rotate("left_hand", pi/2)`, RoboSim will infer that `left_hand` is an `entity` with the type of the robot gripper, and the state of `left_hand` is its current rotation. If there is a domain-specific constraint on the rotation of the robot gripper, such as the gripper can only rotate between $-\pi/6$ and $\pi/6$ radians, then the statement `rotate("left_hand", pi/2)` becomes invalid: no matter what the current rotation of the robot gripper is, rotating $\pi/2$ radians will exceed the maximal allowable rotation range of the gripper, since $\pi/2 > \pi/6 + \pi/6$. ***Weakness:*** Lack of real-world evaluation: The paper only evaluates ROBO-INSTRUCT on a synthetic benchmark. Real-world deployment and testing are required to further assess its practicality. ***Response:*** We have deployed the Robo-Instruct fine-tuned model on a real robot using edge computing to demonstrate its real-world practicality, as illustrated in the PDF in the general response. This information will be included in the appendix of our revised paper. ***Question:*** Figure 1 is confusing as the authors put the overview of the proposed method, counter examples from previous methods, as well as the benchmark results all in a single figure. ***Response:*** Thank you for the feedback. Our intention was to provide a quick summary to the reader to highlight the relation to previous approaches, our solution, and the high-level results. 
This can be presented with sub-figures to separate them out to make them clearer - we can make this change in the revision. --- Rebuttal Comment 1.1: Title: Dear reviewer, please read and respond to authors' rebuttal. Comment: This paper has very diverse reviews and it would benefit a lot to start a discussion to clarify any confusing points. Thanks! Your AC.
Rebuttal 1: Rebuttal: # General Response We appreciate the reviewers' careful consideration and positive feedback, as well as their constructive concerns. In this response, we address common points raised in the reviews, particularly the key contribution of RoboInstruct, as well as its applicability to other domains. The PDF includes a quantitative evaluation of our model's inference speed and a real robot demonstration. ## Key contributions of this paper: While previous studies have explored the use of simulators with ***pre-defined simulation environments*** for checking the correctness of LLM-generated output, as presented in Sec 2.1, the key contribution of this paper lies in a fundamentally new approach to ***automatically populate simulation environments*** for use in verification, which we summarize here. A simulation environment (represented as world states in the paper) relies on three concepts: - A list of ***entities*** to reason about, e.g., "apple", "kitchen" - The ***type*** of the entities, and hence their affordances, e.g., "apple" is an object, you can pick it up; "kitchen" is a location, you can go to it, and it contains objects. - The ***state*** of the entities in the world, e.g., the "apple" is in the "kitchen". RoboSim draws inspiration from Angelic Execution [1] in software engineering previously used to infer program properties given incomplete API specifications. It automatically populates a simulation environment for each program and checks the program for correctness against this inferred environment, as presented in Alg. 3. - First, RoboSim infers that it needs to reason about ***new entities*** when they appear in the program being checked. For instance, if a program includes the statement `pick_up("apple")`, RoboSim infers that `apple` is an entity to consider, even if it did not previously exist in the environment. - Second, RoboSim deduces the ***type*** of an entity based on the API call used to interact with it. 
In the above example, `apple` is an `object`, because the `pick_up` function is called -- you can only call `pick_up` on `object` types. The domain definition outlines API functions along with their type requirements; for example, `pick_up` requires an `object` type, while `go_to` requires a `location` type. This allows RoboSim to detect inconsistencies in program interactions with entities. For example, if a program contains: ``` pick_up("apple") go_to("apple") ``` RoboSim will infer that `apple` is an `object` from its use with `pick_up` in the first line and recognize that the `go_to` function, which requires a `location` type, is inappropriately called on an `object` in the second line. Thus, RoboSim would determine that the program fails due to this type mismatch. - Finally, the ***state*** of the entities in the world can also affect the correct execution of the program. An example is provided in Fig 2, and here we illustrate another simple case: ``` if not is_in_room("apple"): pick_up("apple") ``` It is obvious to humans that the program's logic is flawed because it tries to pick up the `apple` if it isn't in the room. However, ***how would the simulator know that this is the failing state?*** The solution to this problem in RoboSim is simple: it simulates ***all possible states of all entities discovered*** and checks that none of them result in erroneous execution of the program. In this example, the discovered entity `apple` can either be present in the current room or not. If the state is such that the `apple` is not present, executing the statement `pick_up("apple")` will result in an error. Such checking would require the enumeration of many states that are exponential in the number of entities discovered. Our solution to this problem is to provide a bounded compute budget to randomly sample from this exponential space, as presented in Alg. 2. 
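To make the inference steps described above concrete, here is a minimal, self-contained Python sketch of the idea -- type deduction from API usage, and enumeration of entity states to find a failing one. All names here are hypothetical illustrations, not the actual RoboSim implementation:

```python
# Hypothetical domain definition: each API call constrains its argument's type.
API_TYPES = {"pick_up": "object", "go_to": "location"}

def infer_types(calls):
    """Discover entities from a program and deduce their types from API usage.

    Returns the inferred entity->type map, or None on a type mismatch
    (e.g. pick_up("apple") followed by go_to("apple")).
    """
    entity_types = {}
    for api, entity in calls:
        required = API_TYPES[api]
        if entity_types.setdefault(entity, required) != required:
            return None
    return entity_types

def pick_up(entity, world):
    # Picking up an object fails if it is not present in the sampled world.
    if not world.get(entity, False):
        raise RuntimeError(f"pick_up failed: {entity} is not in the room")

def flawed_program(world):
    # Mirrors: if not is_in_room("apple"): pick_up("apple")
    if not world.get("apple", False):
        pick_up("apple", world)

def fails_in_some_state(program, entity):
    """Enumerate all states of the discovered entity; the program is
    invalid if any state leads to an execution error."""
    for present in (True, False):
        try:
            program({entity: present})
        except RuntimeError:
            return True
    return False

assert infer_types([("pick_up", "apple"), ("go_to", "apple")]) is None
assert fails_in_some_state(flawed_program, "apple")
```

In the real setting, the state space grows exponentially with the number of discovered entities, which is why the paper samples from it under a bounded compute budget rather than enumerating it.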
## Applying RoboInstruct to other domains In response to reviewers A73B and QEDf’s concerns about the project’s scope, we would like to highlight that our focus on general-purpose service mobile robots is a widely studied and popular area in the AI community [2-6]. Moreover, the key concepts in RoboInstruct are applicable to other domains. For example, consider a broader application than robotics: code generation for an AI-powered personal digital assistant. This AI assistant could handle scheduling events using an API function like `schedule_on_calendar(event, start_time, duration)`. Given the instruction: "My schedule is free tomorrow morning. Please create two 1-hour timeslots for office hours for my robotics and deep learning class." The assistant could generate a program to create these timeslots: ``` schedule_on_calendar("robotics class office hour", "9:30 am", "1 hr") schedule_on_calendar("deep learning class office hour", "10:00 am", "1 hr") ``` In this example, the simulator needs to reason about the entities `robotics class office hour` and `deep learning class office hour`, which are categorized as `event` types. The `event` type indicates that these entities have associated timeslots. The state of these entities is defined by the time they occur: `robotics class office hour` is set for 9:30-10:30 am, and `deep learning class office hour` is set for 10:00-11:00 am. During evaluation, the simulator can identify a time conflict between these two office hours and thus determine that the generated program is invalid. ### References: 1. Broy & Wirsing, On the Algebraic Specification of Nondeterministic Programming Languages, 1981 2. Stark et al., Dobby: A Conversational Service Robot Driven by GPT-4, 2023 3. Li et al., Fine-Grained Task Planning for Service Robots, 2024 4. Wu et al., TidyBot, 2023 5. Wang et al., LLM-based Robot Task Planning with Exceptional Handling for General Purpose Service Robots, 2024 6. 
Liu et al., Ok-Robot, 2024 ## PDF attachment includes experiments on inference speed and deployment to real robots Pdf: /pdf/1084ec04d16bcadab927b0c23c8771534e52d1b1.pdf
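As an aside, the time-conflict check in the digital-assistant example above reduces to a standard interval-overlap test. A minimal Python sketch under an assumed, simplified time format (hypothetical helper names, not the Robo-Instruct code):

```python
def to_minutes(hhmm):
    """Parse a simple 'H:MM am/pm' time string into minutes since midnight."""
    time, ampm = hhmm.split()
    h, m = map(int, time.split(":"))
    h = h % 12 + (12 if ampm == "pm" else 0)
    return 60 * h + m

def overlaps(a, b):
    """Two (start, duration-in-minutes) events conflict iff their intervals intersect."""
    (s1, d1), (s2, d2) = a, b
    return s1 < s2 + d2 and s2 < s1 + d1

robotics = (to_minutes("9:30 am"), 60)       # 9:30-10:30 am
deep_learning = (to_minutes("10:00 am"), 60)  # 10:00-11:00 am
assert overlaps(robotics, deep_learning)      # conflict -> program invalid
```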
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
eXponential FAmily Dynamical Systems (XFADS): Large-scale nonlinear Gaussian state-space modeling
Accept (poster)
Summary: This work develops a deep state-space model (DSSM) for state inference in and system identification of non-linear dynamical systems. The objective function for learning the DSSM parameters is based on a smoothing formulation and thus the evidence lower bound (ELBO). The prior and likelihood parts are separately parametrized and joined in the natural parameter space, where the posterior natural parameters are simply the sum of prior and likelihood natural parameters. Furthermore, a low-rank approximation of the approximate likelihood (via encoder) is used to improve computational efficiency. Experiments are performed on synthetic data (pendulum and bouncing ball) as well as neuroscientific data (motor cortex readings of monkeys). The authors compare mostly against pretty old baselines from before 2016 (DVBF, DKF, SVAE). Strengths: - The paper is very well written and quite easy to follow for the most part (see weaknesses). - Hyperparameter, notation and algorithm details as well as derivations are given in the supplementary material, which is very helpful. Weaknesses: - The mentioned limitations of previous work are not convincing. Fast evaluation of the loss function is hardly the problem that previous DSSM approaches tried to tackle. The main challenge of the VAE-based approaches is that they often only learn to auto-encode and do not properly identify the correct dynamics [1]. Indeed, tackling this problem seems to be a goal of the present work as well, but it is not argued in the introduction. - The related work section also seemed to stop at 2016. There are multiple subsequent works that address and improve upon the problem of learning dynamics from high-dimensional data, see e.g. [2-5], all of which share several ideas with this work. Soundness: I have difficulties with the technical soundness of the paper, especially about the following: - Eq. 
(6) is slightly incorrect: The expectation is taken only over z_{t-1}, however, the terms inside the expectation involve z_{t-1} and z_{t}. A correct smoothing-based objective involves pairwise two-slice marginals, see e.g. [6]. Unfortunately, the paper is based on this equation and it does not seem to be just a typo as subsequent equations (e.g. (19)) have the same error. - What exactly is the pseudo-observation \tilde{y}? What values does it take compared to an actual y? Previous work [2,4,5] uses the term pseudo-observation to refer to an additional latent variable that encodes information about a single time-step observation y_t. This makes it a bit confusing. As far as I understood, the \tilde{y} is not ever actually instantiated and instead used as a proxy for a likelihood approximation. - line 138: I can not follow this argument. Why is smoothing turned into a filtering problem? Filtering and smoothing solve different problems. Do you instead mean that you have a backward-forward smoothing approach? If so, this needs to be explained and defined better. I am having trouble with the equations (14), (15) and several subsequent equations involving \tilde{y}, again mostly because I do not know what exactly these pseudo-observations are. - It becomes more confusing that the introduction of a different distribution pi(z_t) is supposed to help. Please, can you elaborate on this part? How does the introduction of this distribution in the filtering formulation yield a smoothing/ELBO-based objective? Novelty: - It is not clear how this work compares to some of the more recent works that try to resolve issues with works prior to 2016. I referenced below already some closely related works that I am aware of, but there are several others missing in related work. It is not clear how this work is positioned in comparison to those. - eq (11): using the sum of the two terms (prior and likelihood) is not novel, see e.g. 
[4], section 3.3, where a Gaussian likelihood approximation is combined with the Gaussian prior dynamics, leading to a sum of corresponding natural parameters. Experiments: - The experimental evaluation is rather weak. There are no proper benchmarks against more recent works. The qualitative results visualized in Figure 3 do not look that great and it is difficult to estimate how hard it is to predict this kind of data. [1] Karl et al. 2016, Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data. [2] Fraccaro et al. 2017, A Disentangled Recognition and Nonlinear Dynamics Model for Unsupervised Learning. [3] Becker-Ehmck et al. 2019, Switching Linear Dynamics for Variational Bayes Filtering. [4] Kurle et al. 2020, Deep Rao-Blackwellised Particle Filters for Time Series Forecasting. [5] Klushyn et al. 2021, Latent Matters: Learning Deep State-Space Models. [6] Simo Sarkka, Bayesian filtering and smoothing. ------- Minors and suggestions that did not influence the review score - The complexity statement in the introduction does not explain what the variables T, L, S, and R are. In my opinion, it would be better to describe it in words in the introduction with an emphasis on how it compares to other approaches. - line 45: filtering is an even more common approximation than smoothing. Filtering, smoothing and even fixed-lag smoothing are all valid applications, corresponding to whether inference needs to be online, offline or semi-online. - line 48: vEM needs a citation. For instance, VEM is mentioned in the dissertation of Beal, but there is likely a better reference therein. - line 51: to make the difference to the VAE approach more clear, you might consider emphasizing that the alternative is a *coordinate ascent* approach with *alternating* optimization steps. - line 60: which favorable properties are the ones relevant to this work? I suppose the additive property or anything else? It should be made clear what exactly you refer to. 
- line 68) typo: ii) -> iii) - Equation (3) typo: parameter - line 78: citation style changes - dKF, dVBF, vEM all written with lower case first letter, but other works (including the original ones) use capitalization. Technical Quality: 3 Clarity: 3 Questions for Authors: - what is the difference between predictions from filtering and smoothing in Fig. 3? The last filter state should be identical to the last smoothing state. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations have been addressed to some extent. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our manuscript. Below we respond to your suggestions/questions starting with the weaknesses section. > The mentioned limitations of previous work are not convincing. Fast evaluation of the loss function is hardly the problem that previous DSSM approaches tried to tackle. Thank you for your remark — one of the reasons fast evaluation of the loss and approximate posterior statistics is not a problem for previous DSSM approaches is that they restrict the approximate posterior covariance to be diagonal. When the diagonal restriction is dropped, evaluating the KL terms requires linear algebraic operations (log-det, vector multiplication with matrix inverse, trace) that scale cubically with the latent dimensionality, $L$. However, the structure in the posterior covariance arising from our particular sample approximation and low-rank updates makes it possible to carry out those operations in linear time. > The related work section also seemed to stop at 2016 Thank you for pointing this out – we are aware of those works and there are many others as well we would have liked to include in the related works section (see [1a,2a,3a] below to name a few) but were ultimately limited by space. We highlighted the particular works we did because we aim at a general-purpose approach with minimal assumptions about the dynamics, equipped with a scalable inference algorithm; we are not arguing for any a priori latent structures or dynamics models tailored to particular problems. We demonstrated with simple yet general networks that, with a good inference method, one can still achieve SOTA results. Motivated by your comment, we will slightly condense the current related works section so that we can briefly touch upon recent works such as [1-5 and 1a-3a]. > Eq. (6) is slightly incorrect: The expectation is taken only over z_{t-1}, however, the terms inside the expectation involve z_{t-1} and z_{t}. 
We apologize that space was tight and we couldn’t add a few more lines in the main text, but the expectations for Eq(6) do contain the two-slice marginals. Factorizing the variational posterior as $q(z_{1:T}) = q(z_1) \prod q(z_t \mid z_{t-1})$, we can write the standard ELBO as follows to arrive at Eq(6) involving the expected KL. $$ \mathcal{L}(q) = \int q(z_{1:T}) \log \frac{p(y_{1:T}, z_{1:T})}{q(z_{1:T})} \, dz_{1:T}$$ $$= \sum \int q(z_t) \log p(y_t\mid z_t) \, dz_t - \sum \int q(z_{1:T}) \log \frac{q(z_t\mid z_{t-1})}{p(z_t\mid z_{t-1})} \, dz_{1:T}$$ $$= \sum \int q(z_t) \log p(y_t\mid z_t) \, dz_t - \sum \int q(z_{t-1}) q(z_t\mid z_{t-1}) \log \frac{q(z_t\mid z_{t-1})}{p(z_t\mid z_{t-1})} \, dz_{t-1, t}$$ $$= \sum \mathbb{E}\_{q_t} \left[ \log p(y_t \mid z_t) \right] - \sum \mathbb{E}\_{q_{t-1}} \left[ \mathbb{D}\_{\text{KL}}(q(z_t\mid z_{t-1}) || p(z_t \mid z_{t-1})) \right] $$ For the same smoothing objective in an alternative paper see [4a] Eq.(6). > What exactly is the pseudo-observation \tilde{y}? The pseudo observation $\tilde{y}\_t$ encodes current/future observations, $y_{t:T}$, into a Gaussian potential. Since it is a Gaussian potential, it is specified by parameters that interact linearly with the sufficient statistics – the form of those parameterizations is given explicitly by Eq (25) in the main text. In the text, we tried to make this distinction by writing “Importantly, pseudo-observations defined this way encode the current and future observations of the raw data – an essential component for transforming the statistical smoothing problem into an alternative filtering problem.” > line 138: I can not follow this argument. Why is smoothing turned into a filtering problem? It becomes more confusing that the introduction of a different distribution pi(z_t) is supposed to help. 
Please, can you elaborate Thank you for the question – we say smoothing is turned into a filtering problem because pseudo observations, $\tilde{y}\_t$, encode $y_{t:T}$ into a single Gaussian potential, meaning we can filter pseudo observations to obtain statistics of the smoothed posterior; in other words, we have that $\pi(z_t) \approx p(z_t \mid \tilde{y}\_{1:t}) \approx p(z_t \mid y_{1:T})$ . The reason that we introduce $\pi(z_t)$ is because we want tractable Gaussian marginal approximations, and $q(z_t \mid z_{t-1})$ is conditionally Gaussian. > eq (11): using the sum of (prior and likelihood) is not novel Thank you for pointing this out, but we respectfully disagree that this particular parameterization is not novel. We were inspired by works like [5a] and also [4], to use this type of parameterization; but in those works the dynamics are conditionally linear and it is straightforward to apply Gaussian message passing and recover the posterior – here the dynamics we consider are arbitrary and nonlinear, which led us to develop the nonlinear filtering procedure given by Eqs.(27) & (28) to make this possible. > what is the difference between predictions from filtering and smoothing in Fig. 3? You are correct the last filtering and smoothing state should be identical. Our purpose was to show what decoded hand position/latents would look like during filtering in the case of a streaming data setting. [1a] Ansari et al, 2023. Neural Continuous-Discrete State Space Models for Irregularly-Sampled Time Series. [2a] Li et al, 2021. Scalable gradients for stochastic differential equations. [3a] Schimmel et al, 2022. iLQR-VAE : control-based learning of input-driven dynamics with applications to neural data. [4a] Krishnan et al 2016, Structured Inference Networks for Nonlinear State Space Models [5a] Johnson et al 2016, Composing graphical models with neural networks... Thank you again for your thoughtful suggestions and useful comments. 
We hope if we have addressed them you might raise your score to reflect that. We are very motivated to position our paper as best possible, and appreciate all of your helpful input. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for your reply. > one of the reasons fast evaluation of the loss and approximate posterior statistics is not a problem for previous DSSM approaches is that they restrict the approximate posterior covariance to be diagonal. I agree with this and the favorable scaling is indeed a great contribution. However, I still think that the introduction does not do a great job in introducing the problems of this research area, how this work fits in this area and how it addresses short-comings of the previous works. What is missing is how previous works such as DVBF and DKF are insufficient (due to the diagonal approximation), that subsequent work such as [2,4,5] addresses this issue via conditional linearity and closed-form Kalman filtering/smoothing, how that subsequent work is insufficient, and how this work addresses this limitation. > [...] the expectations for Eq(6) do contain the two slice marginals. [...] You are right, the second expectation I thought was missing is of course contained in the KL. So I agree that Eq. (6) is indeed correct. > The pseudo observation encodes current/future observations into a Gaussian potential. Then I do not understand Eq. (12). Surely, you can approximate the likelihood of the actual data into a Gaussian potential. This is also what is used in the two-filter smoothing formula for linear dynamical systems: you collect the current and future data into a Gaussian potential in the form of the backwards-message, typically denoted by \beta = p(z_t | y_{t:T}). It is also possible to define the smoothing distribution via a backward-forward recursion, using that same \beta. The \beta message will be represented using the natural parameters of this Gaussian potential. 
Is this what you are doing here? What I don't understand is why you need any pseudo-targets for this and whether these pseudo-targets that take concrete values actually exist. Why not just write \beta or something like g(z_t; y_{t:T}). Or am I missing something? Are these pseudo-targets used for anything? To underscore my point, note also that the RHS of Eq. (12) does not contain any \tilde{y}. > we say smoothing is turned into a filtering problem because pseudo observations \tilde{y}_t, encode y_{1:t} into a single Gaussian potential, meaning we can filter pseudo observations to obtain statistics of the smoothed posterior; As mentioned above, this looks to me like you are doing a backward-forward algorithm for smoothing. But you are right then with the formulation that in this case smoothing is turned into a filtering problem, however I still find pseudo-targets confusing here and I think it is easier to work with the Gaussian potentials or backward messages directly. > Thank you for pointing this out, but we respectfully disagree that this particular parameterization is not novel. [...] here the dynamics we consider are arbitrary and nonlinear, which led us to develop the nonlinear filtering procedure given by Eqs.(27) & (28) to make this possible. Agreed. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their comments and respond below. > However, I still think that the introduction does not do a great job in introducing the problems of this research area, how this work fits in this area and how it addresses short-comings of the previous works. What is missing is how previous works such as DVBF and DKF are insufficient (due to the diagonal approximation), that subsequent work such as [2,4,5] addresses this issue via conditional linearity and closed-form Kalman filtering/smoothing, how that subsequent work is insufficient, and how this work addresses this limitation. 
We generally agree with the reviewer that some of the relevant advances need to be contrasted to form a better bigger picture for the reader. There are indeed many methods we have not included due to space. The switching LDS style systems such as [4] (or similarly rSLDS and its extensions) are good approximate representations of nonlinear state-space models, though we argue that it is not the naturally interpretable one when continuous state space is assumed (as in our neuroscience experiments). Since [4] is a particle filter like solution, it potentially has a tighter bound like [Zhao et al. 2022] which is another variational filter with constant time complexity. While [5] shares some goals of our work, similar to [4] it faces scalability issue in L. As the reviewer points out, our contribution on the scalability front is solid; for example in [5], the experiments were very small with L=3, 4, 5, and T = 15, 30, while we have L = 40 and T = 35, 130. Moreover, in contrast to previous works, we demonstrate the efficacy of our approach for causal filtering, which is immensely important for conducting causal investigations in neuroscientific settings. To the best of our knowledge, this is the only approach that can do so in large L scenarios. Unfortunately neither [4] nor [5] have publicly available code – we will contact the authors so that we can make proper comparisons; our code is already publicly available, and if accepted, the camera ready will have a link to our codebase to support future investigations and reproducibility. We would like to re-emphasize that our variational inference network structure and the associated novel ELBO (Eq.22) are non-trivial contributions that work together to enable the scalability, predictive performance, principled masking, and the causal real-time filtering. 
While we agree that SVAE and [2] both can have non-diagonal covariance, they are both based on a linear dynamical system (LDS) at the core – limiting their expressive power substantially; neuroscience data similar to those examined in the manuscript often exhibit dynamics with topological features LDS cannot capture (e.g. multiple fixed points). We will ensure that the related works section of the revised manuscript clearly articulates how these additional works fit into the bigger picture and also disambiguate their differences as you suggest. > Then I do not understand Eq. (12). Surely, you can approximate the likelihood of the actual data into a Gaussian potential. This is also what is used in the two-filter smoothing formula for linear dynamical systems: you collect the current and future data into a Gaussian potential in the form of the backwards-message, typically denoted by \beta = p(z_t | y_{t:T}). It is also possible to define the smoothing distribution via a backward-forward recursion, using that same \beta. The \beta message will be represented using the natural parameters of this Gaussian potential. Is this what you are doing here (yes)? What I don't understand is why you need any pseudo-targets for this and whether these pseudo-targets that take concrete values actually exist. Why not just write \beta or something like g(z_t; y_{t:T}). Or am I missing something? Are these pseudo-targets used for anything? To underscore my point, note also that the RHS of Eq. (12) does not contain any \tilde{y}. We believe we are on the same page and apologize for the confusion. The pseudo-observation $\tilde{y}\_t$ is indeed not instantiated, and is a representation for the natural parameters of the Gaussian potential given by $p(z_t | y_{t:T})$ which we call $\tilde{\lambda}\_\phi(y_{t:T})$. The $\beta\_{t+1}$ and $\alpha\_t$, parameterizing natural parameter updates as in Eq(24), when additively combined form this backward message as described in Eq (23). 
We realize this notational overhead might cause confusion and if accepted, we will make sure to reduce the notational overhead in the camera ready version. We hope this clears up any confusion. Thank you again for your suggestions and comments; we greatly appreciate the time you have taken to help improve our manuscript.
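For readers following along, the additive combination of natural parameters discussed above is the generic product-of-Gaussians identity; a minimal 1-D Python illustration (not the authors' code, and with arbitrary example values):

```python
def to_natural(mu, var):
    # Natural parameters of a 1-D Gaussian: (precision-weighted mean, precision)
    return mu / var, 1.0 / var

def from_natural(eta, lam):
    var = 1.0 / lam
    return eta * var, var

# Two Gaussian potentials, e.g. a forward (filtering) message and a
# backward message encoding future observations.
mu1, var1 = 0.0, 2.0
mu2, var2 = 1.0, 0.5

# Combining the potentials = summing their natural parameters.
eta1, lam1 = to_natural(mu1, var1)
eta2, lam2 = to_natural(mu2, var2)
mu_post, var_post = from_natural(eta1 + eta2, lam1 + lam2)

# Same result as the closed-form product-of-Gaussians formula.
var_ref = 1.0 / (1.0 / var1 + 1.0 / var2)
mu_ref = var_ref * (mu1 / var1 + mu2 / var2)
assert abs(mu_post - mu_ref) < 1e-12 and abs(var_post - var_ref) < 1e-12
```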
Summary: Update post author rebuttal. Thanks for the clarifications in the common and personal replies. I will raise the score to Accept, trusting that you will make the improvements you mention. ________ The paper presents a class of non-linear state-space models whose dynamics are defined as exponential family distributions. The variational approximation leverages distributions in the same exponential family as the prior, and its parameterization is defined as the sum of prior parameters with a learnable data dependent term. Efficient filtering/smoothing is obtained with an approximate message passing algorithm that exploits the low rank structure of the variational distribution. The model is tested on a number of simulated and real world applications and shows good predictive performances. Strengths: * The paper introduces a theoretically principled model that is quite flexible and can be used in a wide range of applications. As such I believe it can have impact in the neurips community (as long as the code is released) * The model combines ideas from previous work, but is overall novel to the best of my knowledge * The model can handle missing data well * The choice of inference network allows scalable inference * The theoretical section is dense but well explained (unlike the experimental one as noted below) * The predictive performances are better than similar SOTA models Weaknesses: **Main comment** I found the experiment section to be too rushed and therefore hard to understand. The authors should improve its clarity (in case you need space to fit in the 9 pages, you can move some of the theoretical results in section 4 to the appendix). There are several issues that taken together make the experimental section quite hard to follow: 1. In figures 1 and 2 your method is presented as "ours (n)" and "ours (c)", while figure 3 uses "ours (a)" and "ours (c)". These a, c and n versions are however not defined anywhere, so I'm not sure what I am looking at. 2. 
Overall, the way the figures are combined is messy. Not all of them are mentioned and described in the main text, where they appear in the following quite random order: 2b, 2a, 1, 3c, 3b, 2d. Especially considering the fact that you leave a lot of the explanation to the captions of the figures, it's very hard for the reader to understand which parts of the figures are relevant to look at while reading the different parts of the experimental section. I suggest rethinking the structure of the figures, possibly making a single figure per experiment and making sure they are sufficiently discussed in the main text. 3. Figure 2c mentions the "DMFC RGS" dataset which is not defined in section 5, so not sure what experiment that refers to. Figure 2c is also never mentioned in the main text. 4. In the caption of figure 2a "(top) for fixed S and (bottom) for fixed r" seems wrong. **Other minor points** * Line 36 in the introduction: the complexity term uses terms that are not yet defined * The method name (XFADS) only appears in the title and the discussion, but it is never introduced in the Method section, or used in the experimental section. Either you use it, or you can avoid introducing it * Line 145 - you reference the non-existent equation (94) * Line 330, missing "the" in "if the number of samples" * Is the format of the citations as a superscript an acceptable one for NeurIPS? It's not very common Technical Quality: 3 Clarity: 3 Questions for Authors: Based on what I wrote in the weaknesses section, I would like some clarification on my comments and to know how the authors plan to improve the clarity of the experimental section Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful suggestions. We apologize for the density of the paper at times and appreciate that you mentioned the possibility of moving some algorithmic discussion to the appendix in exchange for higher-level intuitions and more in-depth discussion pertaining to the experimental results. We wholly agree, and as we echoed in the global rebuttal statement, we are taking those suggestions to heart and moving some details from the sampling-based approximation to the appendix. > Based on what I wrote in weaknesses section, I would like some clarification on my comments and know how the authors plan to improve the clarity of the experimental section In addition to what we wrote previously addressing this, motivated by your suggestions, we are restructuring the manuscript to enhance the clarity of the experimental section by: - Displaying figures in the same order in which the experiments are presented in the text - Explaining the data in more detail, such as the neural data we used to demonstrate model efficacy in monkey reaching and monkey timing (DMFC-RSG) tasks - Interweaving more explanation of the figures into the main text so that important details are not only contained in the captions. - Moving some algorithmic discussion to the appendix and, in turn, fleshing out the significance of the experimental results and giving more exposition. > Is the format of the citations as a superscript an acceptable one for neurips? It's not very common Thank you for asking – we did consult the guidelines beforehand. We have to again thank the reviewer for motivating us to move some algorithmic discussion to the appendix in exchange for space that can be spent on clarifying the practical implications demonstrated in the experimental results section. We hope that, if we have adequately addressed your concerns, you might raise your score to reflect that. We are motivated to continue improving the manuscript, and appreciate all of the input you provided.
--- Rebuttal Comment 1.1: Title: Score raised Comment: Thanks for the clarifications in the common and personal replies. I will raise the score to Accept, trusting that you will make the improvements you mention. --- Reply to Comment 1.1.1: Comment: Thank you very much for your thoughtful consideration and for raising the score to Accept. We are grateful for your positive feedback and are committed to making the improvements you mentioned. Your insights have been incredibly valuable in refining our work, and we will ensure that the final version addresses all points raised in the review.
Summary: This paper introduces a method for scalable nonlinear Gaussian state-space modeling that relies on variational autoencoders and a low-rank covariance matrix assumption for efficient inference by optimizing an approximate variational lower bound. The authors describe the computational benefits of their method and inference scheme and showcase its effectiveness on multiple real-world applications, drawing connections to neuroscience. Strengths: The paper reads clearly and is well-organized. The authors produce convincing experiments, and a novel inference method that scales linearly with the dimensionality of the state space. The authors approximate the filtering distribution with a differentiable approximation for gradient-based updates. The appendix is concise, with relevant and clear background information. The authors clearly motivate their work using real-world applications and a thorough literature review. Weaknesses: No major complaints. The paper is dense and notation heavy at times. It is unclear to me why the authors make a connection to causal amortized inference, and whether this is a natural connection for the scope of the paper. Technical Quality: 4 Clarity: 4 Questions for Authors: How easily extendable is the method to non-Gaussian state-space models? More discussion on general methodology? What do the learned low-rank covariance matrices look like? How does amortization affect the quality of the approximate posterior? What if T is small? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: In lines 89-92, the authors list the limitations of amortized inference for state space models. Are there limitations more specific to this particular method that the authors omit? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review the paper and for your comments, which will undoubtedly increase the quality of our manuscript. > The paper is dense and notation heavy at times. We apologize that the paper could be dense at times. Motivated by this comment and others, in order to enhance the clarity of the manuscript's main takeaways, we will move non-essential algorithmic details to the appendix in favor of extending the discussions of the experimental results section. > It is unclear to me why the authors make a connection to causal amortized inference, and whether this is a natural connection for the scope of the paper. Thank you for this question about causal amortized inference. One of the design choices we made, making it possible to perform causal inference (which we feel has important possible implications in real data-analysis where XFADS can be applied), was segmenting local/backward encoders – in that way filtering can be performed causally using the local encoder; because other sequential VAE models do not make that distinction, we find the ability to perform causal inference a feature of our model that distinguishes it from other approaches. > How easily extendable is the method to non-gaussian state-space models? Thank you for the great question! The convenient part of working with general exponential family representations is that the inference algorithm we presented (sans the Gaussian-specific parts) is agnostic to the choice of distribution so long as we have a way of evaluating $\lambda_{\theta}(z_{t-1})$ and $\mu_{\theta}(z_{t-1})$. > What do the learned low-rank covariance matrices look like? Thank you for the question, we did not include these in the main paper due to space constraints, but we feel it would be beneficial to have some example learned covariances from an experiment.
In the rebuttal PDF, we show an example trial from the bouncing ball experiment for the models that use nonlinear dynamics; it's interesting to see how, for example, contact with the wall results in spatially complex covariance structures in latent space that cannot be captured by the diagonal approximations. > How does amortization affect the quality of the approximate posterior? What if T is small? In the case of smaller datasets, smaller T, or amortization networks that are not very expressive, the method could still suffer from overfitting/amortization gaps usually associated with VAEs. > In lines 89-92, the authors list the limitations of amortized inference for state space models. Are there limitations more specific to this particular method that the authors omit? Thank you for pointing this out. We have added to the discussion section some comments on further limitations – “Furthermore, depending on generative model specifications, such as $L$, while the inference framework of Alg. 1 is always applicable, modifications of the message passing procedure in Alg. 2 might be necessary to maximize efficiency (e.g. if $L$ is small but $S$ is large)." In addition to that, in the related works section we will mention that there will be overhead incurred using low-rank approximations as compared to diagonal ones, but significant savings compared to methods like SVAE that use exact Gaussian message passing. We appreciate the time and effort you put into your review. We hope that if you feel we have addressed the issues raised you might raise your score to reflect that. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed, thorough rebuttal, and address some of their comments below. > Thank you for this question about causal amortized inference.
One of the design choices we made, making it possible to perform causal inference (which we feel has important possible implications in real data-analysis where XFADS can be applied), was segmenting local/backward encoders – in that way filtering can be performed causally using the local encoder; because other sequential VAE models do not make that distinction, we find the ability to perform causal inference a feature of our model that distinguishes it from other approaches. Thank you for the clarification. > The convenient part of working with general exponential family representations is that the inference algorithm we presented (sans the Gaussian specific parts) is agnostic to the choice of distribution so long as we have a way of evaluating $\lambda_{\theta}(z_{t-1})$ and $\mu_{\theta}(z_{t-1})$. Thank you for the clarification. If I understand correctly, fast inference relies on the reparameterization trick for approximating the reconstruction term in the evidence lower bound. For exponential family distributions that are not reparametrizable, what variance reduction techniques do you recommend? > In the rebuttal PDF, we show an example trial from the bouncing ball experiment for the models that use nonlinear dynamics; its interesting to see how for example, contact with the wall results in spatially complex covariance structures in latent space that cannot be captured by the diagonal approximations. Very interesting. I request the authors include at least one example figure in the appendix (no need to compare to other methods here) so the readers may observe the low rank, non-diagonal covariance structures. > We have added to the discussion section some comments on further limitations – “Furthermore, depending on generative model specifications, such as $L$, while the inference framework of Alg.1 is always applicable, modifications of the message passing procedure in Alg.2 might be necessary to maximize efficiency (e.g. if $L$ is small but $S$ is large)." 
In addition to that, in the related works section we will mention that there will be overhead incurred using low-rank approximations as compared to diagonal ones, but significant savings compared to methods like SVAE that use exact Gaussian message passing. Thank you for the additional comments regarding the limitations of the work. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their prompt response and helpful suggestions. We address your comments below. > Thank you for the clarification. If I understand correctly, fast inference relies on the reparameterization trick for approximating the reconstruction term in the evidence lower bound. For exponential family distributions that are not reparametrizable, what variance reduction techniques do you recommend? Thank you for the question. You are right that fast inference relies on the reparameterization trick. In cases where the standard reparameterization trick cannot be applied, an alternative (equivalent to the extended Kalman filter in the Gaussian case) would be to linearize the dynamics function in mean parameter space, $\mu_{\theta}(z_{t-1})$, about the sufficient statistics $\mathcal{T}(z_{t-1})$ so that $\mu_{\theta}(z_{t-1}) \approx F \mathcal{T}(z_{t-1}) + f$, letting us evaluate the predict step equation $\mathbb{E}[\mu_{\theta}(z_{t-1})] \approx \mathbb{E}[F \mathcal{T}(z_{t-1}) + f] = F \mu_{t-1} + f$. Alternatively, it would also be viable to use implicit reparameterization gradients [1] when evaluating the ELBO. Motivated by this question, we will include additional discussion related to these alternatives in the appendix. [1] Figurnov et al. 2018. Implicit reparameterization gradients > Very interesting. I request the authors include at least one example figure in the appendix (no need to compare to other methods here) so the readers may observe the low rank, non-diagonal covariance structures.
Thank you for the original suggestion — we will make sure to include in the appendix of the manuscript the bouncing ball covariances from the rebuttal PDF as well as visualizations of the latent state covariance for some of the other experiments. Thank you again for your thoughtful questions and constructive feedback. We appreciate your time and effort in reviewing our response and believe the revisions will strengthen the manuscript.
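The mean-parameter linearization proposed in the exchange above can be checked numerically. A minimal NumPy sketch follows; everything here is illustrative, not the authors' code: `F` and `f` are a hypothetical affine dynamics mean, and we take the Gaussian case where the sufficient statistic is $\mathcal{T}(z) = z$, so the predict step $\mathbb{E}[\mu_{\theta}(z_{t-1})] = F \mu_{t-1} + f$ is exact and should agree with Monte Carlo propagation:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4                                  # latent dimension (hypothetical)
F = rng.normal(size=(L, L)) * 0.3      # affine dynamics mean: mu_theta(z) = F z + f
f = rng.normal(size=L)

# Gaussian q(z_{t-1}) with mean mu_prev; sufficient statistic T(z) = z here.
mu_prev = rng.normal(size=L)
cov_prev = np.eye(L) * 0.1

# Linearized predict step: E[mu_theta(z)] = F mu_{t-1} + f, exact when the
# dynamics mean is affine in T(z).
mu_pred_closed = F @ mu_prev + f

# Monte Carlo check: propagate samples through the affine dynamics mean.
z = rng.multivariate_normal(mu_prev, cov_prev, size=100_000)
mu_pred_mc = (z @ F.T + f).mean(axis=0)

assert np.allclose(mu_pred_closed, mu_pred_mc, atol=1e-2)
```

For genuinely nonlinear dynamics, `F` and `f` would come from linearizing the dynamics network about the sufficient statistics, and the same identity holds only approximately.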
Summary: The paper presents a novel state space model (SSM) framework for learning dynamical systems with nonlinear latent dynamics. The proposed method borrows inspiration from structured variational autoencoders (SVAEs) and sample-based nonlinear Bayesian filtering, using low-rank corrections to the prior to capture information from the observations and compute approximations to the variational posterior. This framework addresses limitations in current methods, which fall short on model expressivity or predictive power. Strengths: This paper builds upon previous work very well while addressing their weaknesses adequately. The authors show a high level of technical understanding regarding Bayesian filtering and variational inference, and from what I have read the methods proposed are technically sound (I did not go over all of the math in the appendix). As far as I’m aware, the proposal of a structured inference framework for exponential family SSMs that allows for learned non-linear dynamics is a good contribution that fills a gap in the literature. The paper is also professionally written and contains few mistakes or typos. Overall a good paper with solid contributions. Weaknesses: - Main technical ideas are conveyed in a convoluted way. The main paper is unnecessarily dense, making it hard to read and understand. One suggestion is to keep only the core equations (such as the objective (21)(22), the low rank parameterization (24), etc.) and provide more high-level discussions on the intuition behind the model. For example, the authors could talk about the relationship between the pseudo-observations and the potentials in the SVAE paper, and give intuition on how to understand these quantities. Technical details such as the sample-based approximate inference and the corresponding low-rank updates are pretty standard in my opinion and should be left in the appendix. - Some related work is missing.
From my understanding, the authors use sampling-based linear Gaussian approximations for inference in the nonlinear dynamical system. This is closely related to the extended Kalman filter (EKF), which uses linear approximations to the dynamics based on the Jacobian, and the unscented Kalman filter (UKF), which uses anchor points instead of random samples to estimate the mean and covariance of the predictive distribution. There have also been attempts to combine neural networks with these nonlinear Bayesian filtering methods, such as (Liu et al., 2024). - Somewhat limited experimental verification. While I do think that the existing results convey the effectiveness of the proposed methods well, I do wish that more in-depth comparisons and discussions could be made. For example, how well does the low-rank update scale to higher-dimensional systems? Comparisons like this give the reader a better sense of the tradeoffs of the method compared to others in the literature. Overall, I consider these points to be relatively minor and do not detract from the paper's contributions. Liu, Wei, et al. "Neural extended Kalman filters for learning and predicting dynamics of structural systems." *Structural Health Monitoring* 23.2 (2024): 1037-1052. Technical Quality: 3 Clarity: 3 Questions for Authors: - Can this method be efficiently parallelized? - What is the practical computational efficiency of this method compared to the DKF or the SVAE in wall-clock time? Are the computational overheads for the sampling-based inference significant in practice? - In figure 1, the DVBF and SVAE seem to be underperforming the DKF in the pendulum experiments, despite the pendulum motion closely resembling a linear dynamical system. Why is this the case? - How well does the low-rank update scale to higher-dimensional systems? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The authors have adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time and effort, and for asking questions that we feel have helped make the manuscript stronger. Below we respond to them in order. > One suggestion is to keep only the core equations (such as the objective (21)(22), the low rank parameterization (24), etc.) and provide more high-level discussions on the intuition behind the model. The author could talk about the relationship between the pseudo-observations and the potentials in the SVAE paper, and give intuition on how to understand these quantities. Thank you for this suggestion — we understand the paper is dense and building more intuition will benefit the reader. Motivated by your suggestion along with those of reviewer UuWx and others, we are moving some algorithmic details from section 3 to the Appendix in favor of higher-level intuitions and clarity of the experimental results. > Some related work is missing. From my understanding, the authors use sampling-based linear Gaussian approximations for inference in the nonlinear dynamical system. This is closely related to the extended Kalman filter (EKF) which uses linear approximations to the dynamics based on the Jacobian, and the unscented Kalman filter (UKF) which uses anchor points instead of random samples to estimate the mean and covariance of the predictive distribution. Thank you for bringing up other approximate filtering methods. This led to the discovery that the approximate predict step of Eq. (23), when dynamics are nonlinear Gaussian, coincides with the statistically linearized Kalman filter [1]. Thanks to your suggestion, we add an additional line after introducing the approximate filter to clarify this connection: “...otherwise, Monte Carlo integration can provide a differentiable approximation; for nonlinear and Gaussian dynamics this leads to a predict step equivalent to the statistically linearized Kalman filter [1].” > Somewhat limited experimental verification.
While I do think that the existing results convey the effectiveness of the proposed methods well, I do wish that more in-depth comparisons and discussions could be made. For example, how well does the low-rank update scale to higher-dimensional systems? Comparisons like this give the reader a better sense of the tradeoffs of the method compared to others in the literature. These are great suggestions and, as per the earlier remark, we are moving non-essential algorithmic details into the appendix. We will be using the extra space to help build extra intuition – for example, elaborating on Fig. 2(a,b) and emphasizing the favorable scaling illustrated in the figure. We have also run experiments using the latent SDE of [3] as an additional point of comparison (Fig. 1, rebuttal PDF). > What are the practical computational efficiency of this method compared to the DKF or the SVAE in wall clock time? Are the computational overheads for the sampling-based inference significant in practice? Thanks for this question -- it has motivated us to expand on this in the discussion section as well as add other methods' wall-clock time to Fig. 1a for reference. To answer: compared to methods like DKF/DVBF that use a diagonal approximation there is additional overhead, since our method scales $\mathcal{O}(LSr)$ per step; however, there are significant savings compared to SVAE, which scales $\mathcal{O}(L^3)$ per step due to its dense marginal covariances. > In figure 1, the DVBF and SVAE seems to be underperforming the DKF in the pendulum experiments, despite the pendulum motion closely resembling a linear dynamical system. Why is this the case? It is true that the pendulum closely resembles an LDS, but in this case the small-angle approximation does not hold [2], which would make long-term forecasting difficult for a model using a linear dynamical system prior.
This is one of the reasons that all methods perform very well in the smoothing metric but not the prediction metric; especially SVAE, where the linear dynamics approximation degrades as the horizon increases. > Can this method be efficiently parallelized? Things can be easily parallelized across batches, but parallelizing across time would require a different inference scheme (since we sequentially propagate samples through the dynamics). However, one possibility is to use a parallelizable inference network architecture (such as S4 [4]) to produce marginals and train it as usual using Eq. (22). > How well does the low-rank update scale to higher-dimensional systems? I do wish that more in-depth comparisons and discussions could be made. For example, how well does the low-rank update scale to higher-dimensional systems? Comparisons like this give the reader a better sense of the tradeoffs of the method compared to others in the literature. Thank you for these questions – we aimed to elucidate some aspects through conducting the experiments featured in the left half of figure 2. Motivated by your previous comment, we will now have more room to explain the significance of that experiment in showing: i) purposely exploiting covariance structures makes it possible to develop a filtering algorithm scaling linearly in the latent dimension, L — compared to the O(L^3) cost per step incurred for typical Gaussian filters, and ii) how low-rank covariance parameterizations can also lead to a tighter ELBO. We would like to thank the reviewer again for their time and useful suggestions that will serve to make our manuscript stronger. We hope that if we have addressed your concerns you might raise your score to reflect that, and we are eager to address any concerns or questions that might remain. [1] Särkkä, Bayesian filtering and smoothing. [2] Zhao and Linderman, ICML 2023. Revisiting Structured Variational Autoencoders [3] Li et al., 2021.
Scalable gradients for stochastic differential equations [4] Gu et al, 2021. Efficiently Modeling Long Sequences with Structured State Spaces --- Rebuttal Comment 1.1: Title: Last day for discussion Comment: Reviewer e1Je, today is the last day for discussion. I hope you will respond to the authors' rebuttal. --- Rebuttal Comment 1.2: Comment: I would like to thank the authors for their thorough response. I am glad that the authors find my suggestions on the paper organization and related works helpful. Trusting that the authors will make the promised changes and provide more discussion on the benefit and scaling properties of the low-rank approximation, I have raised my score to a 7.
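The sample-based predict step discussed in this exchange (propagating S samples through a nonlinear dynamics mean, then moment matching a Gaussian) can be sketched as follows. The dynamics function, shapes, and noise levels below are illustrative stand-ins, not the paper's; the point is only that the per-step cost is dominated by S evaluations of the dynamics rather than a dense L x L operation:

```python
import numpy as np

rng = np.random.default_rng(1)
L, S = 8, 256                        # latent dim and number of samples (illustrative)

def dynamics_mean(z):
    # A stand-in nonlinear mean function mu_theta(z); not the paper's network.
    return np.tanh(z) * 0.9

mu_prev = rng.normal(size=L)
chol_prev = 0.2 * np.eye(L)          # Cholesky factor of the q(z_{t-1}) covariance
Q = 0.05 * np.eye(L)                 # process noise covariance

# Sample-based (statistically linearized) predict step:
eps = rng.normal(size=(S, L))
z_samples = mu_prev + eps @ chol_prev.T
fz = dynamics_mean(z_samples)

mu_pred = fz.mean(axis=0)
centered = fz - mu_pred
cov_pred = centered.T @ centered / (S - 1) + Q   # moment-matched covariance

assert cov_pred.shape == (L, L)
```

In the Gaussian case this moment matching is what makes the approximate predict step coincide with the statistically linearized Kalman filter mentioned in the rebuttal.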
Rebuttal 1: Rebuttal: First, we would like to thank all of the reviewers for their time, effort, and helpful comments regarding the submitted manuscript – we feel many of your suggestions have led us to changes and additions that better position the paper. Many reviewers were positive about the clarity of writing, but felt that the paper was a bit dense at times and that room taken up by some algorithmic details could be used instead to deliver more insightful discussion in the experimental results section. We agree with this sentiment and are very happy it was brought to our attention – after reading through all of the reviews, we will make the following changes that will help to elucidate the significance of the experimental results section. - Some equations from the sample approximation structure will be delegated to the appendix, which will help to i) create space for higher-level insights and discussion and ii) keep the reader's focus away from these lower-level details that might detract from the main message. - We will use the extra space for extra discussion regarding the results; for example, Fig. 2a/b demonstrate the favorable scaling of our inference algorithm, how low-rank approximations scale, and convergence of the ELBO for different parameter settings – now we are able to drive these points home. - As an additional point of comparison, we have run pendulum/bouncing-ball/monkey reaching experiments using the latent SDE method of [1] – for a total of 5 latent variable models that we compare against. In the rebuttal figure, the additional results include: - Collated results featuring the new results from applying the latent-SDE method to experiments considered in the paper - Additional analysis of the model learned from the monkey reaching data set, showing the potential impact of having a highly predictive model when applied to motor control or brain computer interfaces.
- Examples of the learned posterior covariances of the nonlinear state-space methods in comparison with our own on the bouncing ball dataset. We would also like to take some room to point out what we believe are important technical contributions, and the expected impact this work could have. ## technical contributions To the best of our knowledge, the state-space variational autoencoding framework developed in this manuscript is the only one to allow the recovery of non-trivial covariance over space while scaling well to higher dimensions when general-form (possibly nonlinear) dynamics are used. Gaussian message passing naively scales O(L^3) per time step, but exploiting the sample approximation and low-rank update structures lets us develop an O(LSr) complexity algorithm – as a further point, infinite-horizon Kalman filters (which assume steady-state covariance) have been classically used when L becomes large and still scale O(L^2). By being able to filter in O(LSr) time per step, we open the door for high-dimensional filtering with tunable fidelity knobs given by S (# of samples in the predict step) and r (rank of the precision update) without restriction to trivial covariance structures. We also view introducing the distinction of specialized local/backward natural parameter encoders in the context of state-space models as a novel contribution for several reasons. i) This makes it possible to use the causal variant of the model after it has been trained, and deploy it in streaming data settings to perform online filtering with no modifications needed. ii) As also discussed in the manuscript, this allows missing data to be seamlessly handled as a result of their natural parameter representation.
iii) This distinction also makes it possible to avoid parameterizing a neural network for the local encoder when the observation model is conjugate to the dynamics, since the optimal local encoder can be found in closed form as $\alpha_t = \nabla_{\mu_t}\mathbb{E}_{q_t}[\log p(y_t \mid z_t)]$. We also hope this distinction can help lead to better architectural design choices for inference networks in the context of amortized inference. ## expected impact Our algorithm and parameterization of the variational posterior allow for causal inference that is suitable for real-time applications and support recovery of temporally causal states. For example, in the manuscript we show, on numerous examples, that the inference framework we propose learns models that are much more predictive than other deep state-space models. Applied to the monkey reaching data in Fig. 3, we showed that on real neural recordings our learned model was able to successfully *predict* reaching behavior (which it was not trained on). We support these results further in the rebuttal PDF, showing accurate movement speed predicted from the model long before the time of movement onset. Thank you again to all of the reviewers for their comments and carefully thought-out suggestions. We hope that we can have a productive dialogue and continue to address any remaining concerns that the reviewers might have and position this work as best as possible. Best, The authors Pdf: /pdf/2d33d70263dea4c62a93a8dc3956288d473ce982.pdf
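A hedged sketch of why rank-r precision updates avoid the dense O(L^3) cost discussed in the technical contributions above: with a cheap (here diagonal) predicted covariance and a rank-r precision correction, the Woodbury identity reduces the L x L inverse to an r x r solve. The sizes and factors below are illustrative, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
L, r = 50, 3                           # latent dimension and update rank (illustrative)

d = rng.uniform(0.5, 2.0, size=L)      # diagonal predicted covariance
V = rng.normal(size=(L, r)) * 0.3      # rank-r precision update factors

# Posterior precision: diag(1/d) + V V^T.  The Woodbury identity gives the
# posterior covariance with only an r x r inverse instead of an L x L one:
# (D^{-1} + V V^T)^{-1} = D - D V (I_r + V^T D V)^{-1} V^T D.
D = np.diag(d)
K = np.linalg.inv(np.eye(r) + V.T @ D @ V)     # r x r
cov_woodbury = D - D @ V @ K @ V.T @ D

# Dense reference computation, O(L^3):
cov_dense = np.linalg.inv(np.diag(1.0 / d) + V @ V.T)
assert np.allclose(cov_woodbury, cov_dense, atol=1e-6)
```

Combined with the S-sample predict step, this is the kind of structure that makes an O(LSr)-per-step filter plausible for large L.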
NeurIPS_2024_submissions_huggingface
2024
An Improved Empirical Fisher Approximation for Natural Gradient Descent
Accept (poster)
Summary: The paper proposes a modification of the empirical Fisher matrix that re-scales a datum's parameter gradient by its logit gradient. This aims to remove a bias in the empirical Fisher, dubbed the 'inversely-scaled projection issue', which benefits data points that have already been learned. While the modified empirical Fisher does not introduce any significant cost, its approximate natural gradient is shown to be closer to the true natural gradient of the empirical Fisher and a Monte-Carlo approximated Fisher, and is more robust to the selection of damping. Strengths: - **Clarity & Evaluation:** The text is clearly written, and the proposed modification is theoretically justified. The experiments look thorough to me (error bars, modern fine-tuning applications), are clearly analyzed, and support the paper's main claims. - **Effectiveness:** The proposed modification does not add significant computational overhead, but is shown to significantly improve the proximity to the true natural gradient. It is easy to integrate into existing approximate natural gradient optimizers and could therefore serve as a promising direction for the development of future approximate NGD methods. Weaknesses: - **Explanation for better approximation of NG update:** The paper demonstrates empirically that their EF modification leads to pre-conditioned gradients which are closer to the true natural gradient. To me, it is not completely clear how addressing the EF's 'inversely-scaled projection issue' causes this effect. The authors could further strengthen their paper by providing a theoretical argument why removing the empirical Fisher's bias towards learned data points results in pre-conditioned gradients with stronger resemblance to the true natural gradient. One way to do this could be by considering the distributions over which the gradient covariances are taken. The empirical Fisher uses the data distribution, and the true Fisher uses the likelihood implied by the model.
It might be interesting to look into whether one can make statements how the proposed modification changes the data distribution, and whether it brings it closer to the likelihood used in the true Fisher. - **Optimizer scalability \& experimental scope:** The authors focus on evaluating their modification in the 'exact' setting, i.e. without introducing additional structural approximations like block diagonals or Kronecker factorizations. The implementation needs to hold the per-sample gradients in memory, which is costly and renders the optimizer impractical for full training of large architectures. I think this is to some extent okay for the scope of this paper, because it is relatively obvious how one would apply these structural approximations on top of the proposed EF modification. However, the current experiments focus heavily on fine-tuning settings and I wonder if the findings hold for other settings, too. I would be convinced further about the findings if the authors provided an additional experiment for 'traditional' image classification (say CIFAR100 or CIFAR10) with a ResNet (say ~10M parameters). It should be possible to scale the implementation to this setup. I believe a good starting point are the hook-based approaches of the BackPACK [1] and ASDL [2] frameworks, in combination with the low-rank tricks described in [3, 4], which instead of storing the per-sample gradients, compute the pre-conditioner matrices from layer inputs and output gradients using less memory. **References** [1] Dangel, F., Kunstner, F., & Hennig, P. (2020). BackPACK: Packing more into Backprop. ICLR. [2] Osawa, K., Ishikawa, S., Yokota, R., Li, S., & Hoefler, T. (2023). ASDL: A unified interface for gradient preconditioning in PyTorch. [3] Dangel, F., Tatzel, L., & Hennig, P. (2022). ViViT: curvature access through the generalized Gauss-Newton's low-rank structure. TMLR. [4] Yang, M., Xu, D., Wen, Z., Chen, M., & Xu, P. (2022). 
Sketch-based empirical natural gradient methods for deep learning. Journal of Scientific Computing.

Technical Quality: 3 Clarity: 3

Questions for Authors:
- Q1: Are there any assumptions on the number of neural network outputs for Theorems 5.3, 5.4? You say they are extensions of the proofs in [48], which I believe are stated for single-output NNs.
- Q2: How does iEF/EF work in settings with sequence-valued predictions per datum, and does the computational cost scale in the sequence length?
- Misc: Some typos & editing suggestions:
  - L71: 'has gained' -> 'have gained'
  - Fig. 1: Please add somewhere in the caption that $N=2$
  - L153: 'NGD' instead of 'GDN'?
  - L306: 'suffer' -> 'suffers'

Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

# Author Response to Reviewer Khnn

Thank you for your review!

> Explanation for better approximation of NG update...To me, it is not completely clear...why removing the empirical Fisher's bias towards learned data points results in pre-conditioned gradients with stronger resemblance to the true natural gradient... One way to do this could be by considering the distributions over which the gradient covariances are taken...

We agree that the statistical framework is commonly used to analyse the approximation quality to NG updates. However, such a framework cannot be easily applied to analyse iEF, because iEF is primarily motivated from an optimisation/geometric perspective. It remains important future work to find theoretical support for iEF from a statistical standpoint. As stated in Sec. 5.2, the iEF update is designed to exactly match the loss-reduction behaviour of the GN algorithm, which reduces the loss for each sample according to its logits-level gradient norm. This allows iEF to be "curvature-aware", performing more aggressive updates for untrained samples (with large gradient norm) and conservative updates for converged samples (with small gradient norm). This curvature-awareness helps iEF better approximate second-order methods, including NGD. In contrast, due to the inversely-scaled projection issue, EF updates are essentially "anti-curvature", performing aggressive updates for converged samples and conservative updates for untrained samples. This clearly makes EF a worse second-order method than iEF. It is worth mentioning that such curvature-awareness may not hold for SGD updates, which may even increase the loss of some samples (depending on the covariance of the per-sample gradients) in a training batch.
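This loss-reduction contrast can be checked numerically. The sketch below is an illustration only (the gradient matrix $J$ and the scaling values $s_n = \|\nabla_{z_n} l_n\|^2$ are synthetic, not from the paper): it builds the undamped ($\lambda \to 0$) preconditioned updates from a full-row-rank per-sample gradient matrix and verifies that EF gives every sample the same first-order loss reduction $-\eta$, while iEF scales each sample's reduction by its logit-gradient norm.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, eta = 4, 50, 1e-3          # samples, parameters, step size (assumed values)
J = rng.standard_normal((N, P))  # rows are per-sample gradients g_n^T (full row rank a.s.)
s = rng.uniform(0.1, 2.0, N)     # hypothetical s_n = ||grad_{z_n} l_n||^2

# (J^T J)^+ J^T computed via the small N x N Gram matrix (damping lambda -> 0)
pinv = J.T @ np.linalg.inv(J @ J.T)

d_ef  = -eta * pinv @ np.ones(N)  # EF-style update direction
d_ief = -eta * pinv @ s           # iEF update: re-scaled by logit-gradient norms

# First-order per-sample loss change: Delta l_n = g_n^T (Delta theta)
assert np.allclose(J @ d_ef,  -eta * np.ones(N))  # EF: equal reduction for all samples
assert np.allclose(J @ d_ief, -eta * s)           # iEF: reduction tracks convergence level
```

The assertions confirm the two closed-form behaviours quoted above: EF blindly enforces a uniform reduction, whereas iEF's per-sample reduction follows the (here synthetic) squared logit-gradient norms.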
> Optimizer scalability & experimental scope: The authors focus on evaluating their modification in the 'exact' setting...I think this is to some extent okay for the scope of this paper...However...I would be convinced further about the findings if the authors provided an additional experiment for 'traditional' image classification (say CIFAR100 or CIFAR10) with a ResNet (say ~10M parameters)...I believe a good starting point are the hook-based approaches...in combination with the low-rank tricks.

We appreciate the reviewer's understanding of our "exact" experimental setup, as well as the suggestions regarding scaling up to larger setups. We already use our own implementation of hook-based methods to quickly compute per-sample gradients, but it should be easier to extend exact iEF to other model structures using the recommended BackPACK codebase [1]. We also appreciate the approximation techniques recommended by the reviewer, as making iEF an efficient optimiser through approximation is important future work. We agree that an additional train-from-scratch image classification setup with a larger model would be beneficial to support the claims in the main paper. We have conducted an experiment for CIFAR10 + large MLP (~10M parameters), which further validates the claims in the main paper. Please refer to ***AE2 of the Global Author Response*** for the detailed experiment setup and results. Note that in this setup we did not need to apply any approximations, as we are able to fit the per-sample gradients of each batch into our GPU RAM.

[1] Dangel, F., Kunstner, F., & Hennig, P. (2020). BackPACK: Packing more into Backprop. ICLR.

> Q1: Are there any assumptions on the number of neural network outputs for Theorems 5.3, 5.4?

There are no assumptions on the number of neural network outputs. This extends the proof in [2], where a regression (scalar-output) model is assumed.

[2] G. Zhang et al (2019).
“Fast Convergence of Natural Gradient Descent for Over-Parameterized Neural Networks”.

> Q2: How does iEF/EF work in settings with sequence-valued predictions per datum and does the computational cost scale in the sequence length?

In the main paper we focus on classification problems, where there is only one output distribution per datum. For sequence-valued predictions, there are multiple output distributions per datum. Assume teacher-forcing is used for training, where the objective function is cross-entropy and a label is provided for every output time-step (assume $T_o$ output time-steps for each datum). There are two ways of applying EF/iEF to this setup. First, if the output sequence is treated as a single probability distribution, the EF update can be computed according to Eqn (3) (with the single label $y_n$ now replaced with the target output sequence $(y\_n)\_1, (y\_n)\_2...(y\_n)\_{T\_o}$). Similarly, for iEF, by treating the model output logits as a whole, iEF can still be computed following Eqn (8). Note that for the iEF scaling factor $\|\nabla_{z_n}l_n\|_2^2$ of the $n$-th datum, the output logits $z_n$ become the concatenated logits vector over all output time-steps, and the loss $l_n$ becomes the accumulated cross-entropy loss over all output time-steps. In this case, for both EF and iEF, the time complexity is constant per datum. Alternatively, every output time-step is treated as an independent classification problem ($T_o$ in total for each datum). Both EF and iEF will then require $T_o$ gradients per datum (one for each output time-step), so the time complexity becomes $O(T_o)$ per datum, which scales linearly with sequence length. The effectiveness of these methods requires further investigation and is left to future work.

> Editing suggestions...

We agree with all the editing suggestions by the reviewer, and will fix them in the final version of the paper.
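The first (concatenated-logits) option can be sketched as follows. This is an illustration only (the shapes, names, and random inputs are our assumptions, not code from the paper): for softmax + cross-entropy, the gradient of the per-step loss w.r.t. the logits is $\mathrm{softmax}(z_t) - \mathrm{onehot}(y_t)$, so the iEF scaling factor for one datum is the squared norm of the concatenated per-step logit gradients.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ief_scale_seq(logits, labels):
    """iEF scaling factor ||grad_z l||^2 for one datum, treating the T_o
    per-step logits as one concatenated vector under CE + softmax."""
    # Per-step gradient w.r.t. logits: softmax(z_t) - onehot(y_t)
    g = np.concatenate([softmax(z) - np.eye(len(z))[y]
                        for z, y in zip(logits, labels)])
    return float(g @ g)

rng = np.random.default_rng(0)
T_o, C = 5, 10                      # output time-steps, classes (assumed)
logits = rng.standard_normal((T_o, C))
labels = rng.integers(0, C, size=T_o)
scale = ief_scale_seq(logits, labels)
assert 0.0 < scale < 2.0 * T_o      # each step's logit gradient has squared norm < 2
```

Computing this factor touches all $T_o$ steps, but only one (concatenated) gradient per datum is needed, matching the constant-per-datum gradient count described above.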
--- Rebuttal Comment 1.1: Title: Follow-up comments

Comment: Dear authors, thank you for your detailed response. I appreciate your attempt to re-explain why iEF might better approximate the true NG than EF, and the additional visualizations. Regarding the from-scratch experiment: I understand the constraints given the limited time. However, the three-layer MLP on CIFAR10 is weak evidence to me, as the architecture is rather synthetic, achieves only mediocre performance (~60% accuracy), and would therefore not be used by a practitioner to train on this data set. Therefore, I still see the necessity to implement and evaluate iEF on networks with convolution layers that are somewhat representative of what a practitioner would do on CIFAR10 (maybe one of the nets from https://github.com/kuangliu/pytorch-cifar that reach ~90% accuracy).

--- Rebuttal 2: Comment:

# Further Response to Reviewer Khnn

Thank you for your comments.

> Regarding the from-scratch experiment: I understand the constraints given the limited time. However, the three-layer MLP on CIFAR10 is weak evidence to me, as the architecture is rather synthetic, achieves only mediocre performance (~60% accuracy), and would therefore not be used by a practitioner to train on this data set.

Thank you for understanding the time constraints that we are experiencing. We note that the reviewer considers our MLP train-from-scratch experiment (***AE2*** of the Global Author Response) to be "synthetic" rather than "practical", and therefore "weak evidence". However, we believe ***AE2*** still provides reasonable evidence for the effectiveness of the iEF method in the train-from-scratch scenario, for the following reasons: 1. Our MLP architecture is an extension of the MLP setup used in [1]. The main differences in our setup are that we used a hidden layer size 4 times larger than [1] (to reach the ~10M parameter size recommended by the reviewer) and that we used no dropout in ***AE2***.
To better compare our setup with [1], we conducted additional experiments (1 seed) for our setup with a dropout of 0.1 (determined by a grid-search); the Adam baseline reached a test accuracy of 60.7% and iEF reached 62.2%. Considering the larger capacity of our model, we believe that these results are comparable to the 56.8% of the Adam baseline in [1], indicating that our setup is ***well-tuned***. 2. Although the test accuracy is far from the SOTA performance for CIFAR10, we believe ***AE2*** yields useful insights for a large train-from-scratch scenario as long as the setup is well-tuned and the comparison among various optimisers is fair. We believe we have achieved both in ***AE2***.

[1] Mahesh Chandra Mukkamala, Matthias Hein. (2017). Variants of RMSProp and Adagrad with Logarithmic Regret Bounds. ICML 2017.

> Therefore, I still see the necessity to implement and evaluate iEF on networks with convolution layers that are somewhat representative for what a practitioner would do on CIFAR10 (maybe one of the nets...that reach ~90% accuracy).

Although we believe that our MLP experiment provides reasonable evidence for the behaviour of our proposed method in a train-from-scratch setup, we agree with the reviewer that it would be ideal to provide experimental results for a better-performing CNN model on CIFAR10. Therefore, we have been working on a CIFAR10 + ResNet32 [2] setup (we did not include batch normalisation layers due to time constraints) and have now completed our first experiments with it. In this setup, iEF achieved the highest test accuracy of 85.6%, which is close to the ~90% mentioned by the reviewer and significantly better than the 58.6% in ***AE2***, making this a much more practically usable setup. The experimental results further validate our claims in the main paper; the details are provided in the section ***AE4: Train-from-scratch Experiment for CNN*** in the Official Comment to the Global Author Response.
[2] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. (2015). Deep Residual Learning for Image Recognition.
Summary: This paper proposes a new Natural Gradient Descent algorithm called iEF. The authors conduct a theoretical convergence analysis of iEF. The proposed iEF method achieves comparable or even superior performance over existing gradient descent methods.

Strengths:
1. The paper is fairly easy to read, and the ideas are presented well.
2. The inversely-scaled projection issue of EF identified in this paper is original.
3. The proposed iEF method achieves better performance compared with the SGD, EF, and SF methods.
4. The theoretical analysis is comprehensive and the results are easy to understand.

Weaknesses:
1. I think some of the descriptions are not clear enough and are difficult to understand.
2. The experiments are not sufficient, and the results are not very impressive.

I explain the above weaknesses in the **Questions** part.

Technical Quality: 4 Clarity: 3

Questions for Authors:
1. I think the reason why the authors use Eq. (7) to solve the inversely-scaled projection issue raised in Lines 135-136 is not well explained. Why can multiplying by the "logits-level gradient norm" resolve the inversely-scaled projection issue?
2. Though the authors say that the proposed scaling vector can be considered an efficient approximation to the GN algorithm in Section 5.2, this still cannot explain why this scaling vector can resolve the inversely-scaled projection issue, as the GN algorithm was not proposed for resolving the inversely-scaled projection issue. Additionally, I cannot find the definition of the GN matrix and the claim in Lines 175-176 in the reference [26]. Can the authors tell me the detailed location?
3. In the Remark of Lines 202-204, the authors say "This means that iEF method does not get stuck in local minima, unlike the gradient descent method." I think most gradient descent methods can achieve the global minimum when the target model uses an m-strongly convex objective function.
4. The experimental results are not sufficient.
The results for the AdamW and Adafactor methods are not complete. The proposed iEF method achieves comparable performance with AdamW. Some SOTA baselines such as the CAME optimizer and the Sophia optimizer are missing.

Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The paper discusses the limitations and there is no societal impact of the work performed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

# Author Response to Reviewer 4Ck3

Thank you for your review!

> I think the reason why the authors use Eq. (7) to solve the inversely-scaled projection issue raised in Lines 135-136 is not well explained...GN algorithm is not proposed for resolving the inversely-scaled projection issue.

We agree that it could be better explained how iEF resolves the inversely-scaled projection issue of EF, and we provide an alternative, clearer explanation here. The inversely-scaled projection issue of EF is mainly caused by its per-sample loss reduction $\Delta \mathbf{l_\text{EF}} = -\eta\mathbf{1}$ (see the equation below line 129), which enforces an equal loss reduction for all training samples without considering their individual level of convergence. This causes EF updates to overly focus on converged samples (with small $||\nabla_\theta l_n||$) while ignoring untrained samples (with large $||\nabla_\theta l_n||$), as is shown by the projection term $\kappa_\text{EF}$ in Eqn (6). To resolve this issue, we improve the per-sample loss reduction behaviour of the EF update to account for the individual convergence level of each sample. The per-sample loss reduction behaviour of iEF is designed to be the same as that of a GN update: $\Delta \mathbf{l_\text{iEF}} = -\eta||\nabla_{z_n}l_n||^2$, as is shown in Eqn (11). This allows iEF updates to be "curvature-aware": they focus on reducing the loss of untrained samples (with large $||\nabla_{z_n}l_n||$) while being conservative with converged samples (with small $||\nabla_{z_n}l_n||$). Note that $||\nabla_{z_n}l_n||$ is believed to be a good indicator of the convergence level of a sample because the objective function $l_n(z_n)$ is convex, and $||\nabla_{z_n}l_n||$ in general becomes smaller as the sample approaches its minimum.
We feel it would be easier for readers to understand the motivation of iEF if we used Eqn (11) (the per-sample loss reduction) for motivation instead of Eqn (7) (the per-sample projection), and we are happy to make such edits in the final version of the paper. Regarding the GN algorithm, the reviewer is correct that it was not proposed to resolve the inversely-scaled projection issue. However, the GN algorithm performs well and is a widely-studied second-order optimiser, particularly for traditional ML setups (and was proposed much earlier (1809) than both NGD (1997) and EF (2007)). The iEF update is designed to match the loss-reduction behaviour of the GN algorithm mainly to learn from "a successful teacher".

> I do not find the definition of the GN matrix and the claim in Lines 175-176 from the reference [26]. Can the authors tell me the detailed location?

Reference [1] below ([26] in the main paper) states that the GN algorithm is equivalent to $L^2$ NGD (meaning that it searches in the $L^2$ output space, i.e. $z$-space); this is mentioned in Section 2.6 of [1]. The GN matrix was introduced as the $L^2$ information matrix $G^{L^2}(\theta)$ in Section 2.1 of [1] (between Eq. (2.6) and (2.7)).

[1] Levon Nurbekyan, Wanzhou Lei, and Yunan Yang. (2023). “Efficient Natural Gradient Descent Methods for Large-Scale PDE-Based Optimization Problems”

> In Remark 202-204. The authors say "This means that iEF method does not get stuck in local minima, unlike the gradient descent method.". I think most gradient descent methods can achieve the global minimum when the target model uses an m-strongly convex objective function.

The "m-strongly convex objective function" in Assumption C.2 means: for the $n$-th sample, the function between the model output $z_n$ and the per-sample loss $l_n$ is m-strongly convex. As is stated in lines 652-653, under such an assumption, ***the per-sample loss w.r.t.
model parameters can still be arbitrarily non-convex*** (depending on the exact structure of the model, i.e. the function between the model parameters $\theta$ and the model output $z_n$). Consequently, gradient descent methods, which operate on the non-convex model parameter space $\theta$, may not be able to converge to global minima under this assumption.

> The experimental results are not sufficient. The results for the AdamW and Adafactor methods are not complete...Some SOTA baselines such as CAME optimizer and Sophia optimizer are missing.

We conducted additional baseline experiments (Adafactor, CAME, and Sophia for LoRA+GLUE; AdamW for PT+GLUE) for 5 selected GLUE tasks, and the validation performance (3 seeds) is reported in the following tables (following the style of Table 6 in the main paper). The following conclusions can be drawn from these experiments: 1. AdamW < Adafactor in PT, and Adafactor < AdamW in LoRA, for 4/5 tasks; that is why these results were left out of the main paper. 2. In LoRA, CAME > AdamW for 3/5 tasks, Sophia > AdamW for 2/5 tasks, iEF > CAME for 3/5 tasks, and iEF > Sophia for 4/5 tasks. The addition of the two new baselines does not change the conclusions of the paper much, but we are happy to include these new baseline results in the final paper.
**Prompt Tuning**

|Method|CoLA|SST-2|MRPC|QQP|RTE|
|-|-|-|-|-|-|
|Adafactor|$82.0\pm0.44$($53.8\pm1.01$)|$94.2\pm0.25$|$84.8\pm0.35$($88.0\pm0.73$)|$90.7\pm0.01$($87.7\pm0.01$)|$64.7\pm0.34$|
|AdamW|$81.7\pm0.53$($55.7\pm1.25$)|$94.2\pm0.07$|$83.1\pm1.36$($87.8\pm0.32$)|$89.8\pm0.27$($86.6\pm0.32$)|$62.3\pm4.85$|

**LoRA**

|Method|CoLA|SST-2|MRPC|QQP|RTE|
|-|-|-|-|-|-|
|AdamW|$83.1\pm0.15$($58.7\pm0.55$)|$94.9\pm0.07$|$88.6\pm0.51$($91.9\pm0.26$)|$90.0\pm0.16$($86.8\pm0.06$)|$83.4\pm1.06$|
|iEF|$83.4\pm0.24$($59.5\pm0.64$)|$94.9\pm0.21$|$88.5\pm0.88$($91.8\pm0.55$)|$89.9\pm0.99$($86.8\pm0.12$)|$81.7\pm0.55$|
|Adafactor|$82.4\pm0.22$($57.8\pm0.52$)|$94.3\pm0.29$|$86.2\pm0.99$($90.4\pm0.57$)|$90.6\pm0.12$($87.5\pm0.17$)|$75.6\pm0.91$|
|CAME|$83.3\pm0.25$($59.2\pm0.75$)|$94.8\pm0.23$|$89.1\pm0.79$($92.1\pm0.63$)|$90.4\pm0.06$($87.3\pm0.05$)|$59.3\pm3.98$|
|Sophia|$83.2\pm0.58$($59.2\pm1.36$)|$94.8\pm0.23$|$88.4\pm0.37$($91.8\pm0.31$)|$90.1\pm0.16$($86.9\pm0.19$)|$75.6\pm0.42$|

--- Rebuttal Comment 1.1: Comment:

**R1**: I think there still exists a gap between the inversely-scaled projection issue and the strategy used. However, this explanation is somewhat clearer and more related to the problem the authors found.

**R2**: Thanks for your reply. I have found the corresponding information.

**R3**: I found that Assumption C.2 is included in the appendix. I suggest the authors state this reply in the revision to avoid misunderstanding. When we try to provide a convergence result, we want to reach the global optimum or a stationary point w.r.t. the model parameters. Is Theorem 5.4 related to this? I did not find the definition of $l_n(t)$. Is it equal to $l_n(\theta(t))$? So $l_n^*$ is related to the optimal model parameter $\theta^*$? Moreover, is there any connection between a strongly convex objective function w.r.t. the model output and the model parameters?

**R4**: Though these tasks are only part of the GLUE tasks, thanks for providing these additional results.
--- Rebuttal 2: Comment:

# Further Response to Reviewer 4Ck3

Thank you for your prompt follow-up!

> R1: I think there still exists a gap between the inversely scaled projection issue and the used strategy. However, this explanation is somehow clearer and more related to the problem the author found.

We are glad that the reviewer finds our explanation clearer. However, we are not sure exactly what you mean by "gap between the inversely scaled projection issue and the used strategy". Could you please explain this in more detail?

> R3. I found that assumption C.2 is included in the appendix. I suggest the author claim this reply in the revision to avoid misunderstanding.

If our understanding is correct, the reviewer is suggesting that we include Assumption C.2 and the remark in lines 652-653 regarding the "non-convex landscape on the parameter space" in the main paper. We agree with this point and are happy to include it in the revision.

> I did not find the definition of $l_n(t)$. Is it equal to $l_n(\theta(t))$.

The reviewer is correct that $l_n(t) = l_n(\theta(t))$. $l_n(t)$ represents the $n$-th element of the vector $\mathbf{l}(t)$ (i.e. the $n$-th per-sample loss at time $t$), which is first presented in Eqn (12) when describing the iEF update in the continuous-time framework.

> So $l^\star_n$ is related to the optimal model parameter $\theta^\star$?

$l^\star_n$ (defined in lines 649-650) represents the lowest possible loss for the $n$-th sample, and is indeed related to the optimal model parameter $\theta^\star$. Near the global minimum, the optimal model parameter $\theta^\star$ is approached as every sample approaches its corresponding optimal loss, i.e. $\forall n, l_n \to l^\star_n$.

> When we try to provide a convergent result, we want to reach the global optimal or the stationary point w.r.t. to the model parameter. Is Theorem 5.4. related to this?
Yes. Theorem 5.4 shows that with the full-batch iEF method, for every sample in the training batch, the per-sample loss $l_n(t)$ approaches the respective optimal per-sample loss $l_n^\star$ at (at least) a linear rate. This in turn means that the model approaches the optimal parameter $\theta^\star$ at a linear rate (as is explained in lines 657-659).

> Moreover, is there any connection between strongly convex objective function w.r.t model output and model parameter?

If our understanding is correct, the reviewer is asking whether the assumption of a strongly convex objective function w.r.t. the model output, $l_n(z_n)$, involves any further assumption on the parameterisation of the model $\theta$. The answer is that we do not make any further assumptions regarding the model parameterisation when making the "strongly convex objective function" assumption.

--- Rebuttal Comment 2.1: Comment:

Thanks for your reply. As for my questions about the convergence result: although this method does not need SC w.r.t. $\theta$, I believe other assumptions, such as 5.1, can connect the properties in $z$-space to the model parameter space, so finally the authors can draw a convergence result w.r.t. $\theta$. (Since SC can lead to PL, why do the authors assume SC in Assumption C.2 but only use PL?) It should be quite hard to evaluate numerically whether this method can achieve the global optimum with an objective function that is non-convex w.r.t. $\theta$ (with multiple local optima) and SC w.r.t. $z$. Overall, I think my questions have been well addressed. Since I am unfamiliar with some pieces of related work, I will keep my positive score.

--- Reply to Comment 2.1.1: Comment:

# Further Response to Reviewer 4Ck3

Thank you for your prompt follow-up!

> I believe other assumptions, such as 5.1, can connect the properties in $z$-space to model parameter space. So finally the authors can draw a convergent result w.r.t. $\theta$.

We agree with this statement from the reviewer.
> why the authors assume SC in Assumption C.2, but only use PL.

The reason we focus on strongly convex objective functions is two-fold: 1. Convergence results for the mean-square-error and general strongly convex objective functions are provided in [1] for scalar-output models, and Theorem 5.4 with Assumption C.2 is a direct extension of their result. 2. Strongly convex objective functions cover a wide range of important objective functions in deep learning setups (in addition to CE+Softmax), and we consider a convergence result for such a family of objective functions to be important. However, as is implied by the reviewer, it is indeed possible to extend our analysis in future work to other objective functions that satisfy PL (beyond strongly convex functions).

> It should be quite hard to evaluate whether this method can achieve the global optimal numerically...

We agree that global convergence in practice requires further validation. However, we believe that our assumptions on the model structure (Assumptions 5.1 and 5.2) are practical for a highly over-parameterised model, and are fairly common for similar convergence analyses in the literature, such as [1].

[1] G. Zhang et al (2019). “Fast Convergence of Natural Gradient Descent for Over-Parameterized Neural Networks”.

> Overall, I think my questions have been well-addressed. Since I am unfamiliar with some pieces of related work. I will keep my positive score.

We are glad you feel that your questions on our paper have been well addressed. Lastly, given the overall assessment score from the reviewer, we would like to ask whether the reviewer has any further concerns with the paper that need to be addressed.
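For reference, the standard convex-analysis fact behind this discussion (a textbook implication, not taken from the paper) is that m-strong convexity of $l_n$ in $z_n$ implies the Polyak-Łojasiewicz (PL) inequality used in such convergence analyses:

```latex
% m-strong convexity of l_n(z_n):
%   l_n(z') \ge l_n(z) + \nabla l_n(z)^\top (z' - z) + \tfrac{m}{2}\|z' - z\|^2
% Minimising the right-hand side over z' gives the PL inequality:
\|\nabla_{z_n} l_n(z_n)\|^2 \;\ge\; 2m\,\bigl(l_n(z_n) - l_n^\star\bigr)
```

Minimising the quadratic lower bound over $z'$ and rearranging yields the displayed inequality, which is why strong convexity is a strictly stronger assumption than PL.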
Summary: The paper analyses the empirical-Fisher-preconditioned gradient and highlights how its components on the per-sample gradients are biased towards well-trained samples, thus leading to potentially unstable training trajectories. The authors propose to solve this issue by scaling the per-sample components accordingly, which has negligible extra cost. Theoretically, they prove convergence guarantees under the full-rank NTK assumption. Empirically, they show that iEF is better aligned with the natural gradient than standard EF, and beats some baselines in optimization performance.

Strengths: The paper is very well written and structured. Notation and setting are gently introduced. The problem with EF preconditioning is clarified to a great extent formally, and is also helped by a nice visualization. The proposed solution is well supported:
- the negligible extra cost is explained
- theoretical guarantees are carried out, under some assumptions (very common in these global convergence results)
- experiments measure both alignment with the natural gradient (in a clever, scalable way) and optimization performance

Weaknesses: Regression or Classification? Section 3 begins by saying that the paper focuses on classification, which at first allows the authors to define the Fisher matrix discarding integrals in favour of a finite-sum notation. Then the visual illustration (Sec 4.2 and App B) is on a regression problem where, conveniently, NGD=iEF. It would be fairer to have a toy 2D example where NGD and iEF are not equal. Can you provide it? Moreover, Assumption C.2 and consequently Theorem 5.4 do *not* hold for the classification setting (right?). If that's the case you should definitely make it clear. And don't get me wrong, the fact that for scalar-output regression your proposed iEF is the exact natural gradient is great, definitely worth being highlighted. But the way these results are presented is misleading.
TYPOS:
- Line 103: "Jacobian" has an extra "c"
- Line 128: shouldn't "loss" be "loss change"?
- Line 153: is "GDN" a typo for "NGD" or am I missing something?
- Line 270: is "iEF" a typo for "EF"?

Technical Quality: 3 Clarity: 4

Questions for Authors:
- In line 160 I don't totally get the meaning of "is now aligned with the convergence level of each sample"; what do you formally mean by the word "aligned"? And in which sense does this "resolve" the inversely-scaled projection issue?
- Lines 130-131: "has full row rank, and both $\lambda$ and $\eta$ are infinitesimally small": aren't these essentially Assumption 5.1 ($\lambda=0$ and $\eta\rightarrow 0$) and Assumption 5.2 (Jacobian full row rank)?
- Is it correct that the reason that limits iEF from scaling to big models is the same reason that limits EF (i.e. the matrix inversion part)?

Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: As mentioned before, there are some limitations with classification/regression which are not well clarified. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

# Author Response to Reviewer 13ZJ

Thank you for your review!

> Regression or Classification? Section 3 begins saying that the paper focuses on classification...Then the visual illustration (Sec 4.2 and App B) is on regression problem...It would be more fair to have a toy 2D example where NGD and iEF are not equal...

We realise a regression visualisation could be confusing as the paper mainly focuses on classification, but we still decided to use the regression setup (Sec 4.2 and App B) in Figure 1 for visualisation for the following reasons: 1. The least-squares regression problem is commonly used when analysing NGD and EF in the literature [1, 2]. In particular, our visualisation follows a similar setup to [2], which is an important related work regarding the limitations of EF. Overall, the toy regression example allows us to be consistent with the literature. 2. The least-squares regression problem has several ***nice properties*** (considering only two data points): there exists ***a unique*** global minimum; the NG update can find the global minimum in one step, so its advantage over all other updates ***is self-evident***; the iEF update and the NG update ***have the same form*** (as pointed out in the paper and by the reviewer); and the distortion of the EF update ***is extreme***. Note that none of these ***properties*** hold for a classification setup, which may make the visualisation less straightforward to understand. However, as noted by the reviewer, given the scope of this paper, it is important to include a toy 2D visualisation for a classification setup, and this is provided in ***Figure 1 of the global response document***. It again demonstrates the distortion effect of EF updates, and the similarity between the iEF and NG updates. Please refer to ***AE1 of the Global Author Response*** for details.
In the final version of the paper, we will clarify the justification of the regression visualisation, and we can include this new classification visualisation if space permits.

[1] Valentin Thomas et al. (2019). “On the interplay between noise and curvature and its effect on optimization and generalization”
[2] Frederik Kunstner, Lukas Balles, and Philipp Hennig. (2019). “Limitations of the Empirical Fisher Approximation for Natural Gradient Descent”.

> Moreover, Assumptions C.2 and consequently Theorem 5.4 does not hold for classification setting (right?). If that's the case you should definitely make it clear.

The global convergence analysis for the classification setup (which is more important for the scope of this paper) is given in Theorem 5.3. We proved Theorem 5.4 mainly to extend the iEF global convergence guarantee to more general setups. The reviewer is correct that Assumption C.2 and Theorem 5.4 do not hold for classification, because the softmax + CE loss function typically used in classification problems is not strongly convex. We will emphasize this point in the final paper.

> Typos...

Thanks for pointing out these typos. We will fix them in the final version of the paper.

> In line 160 I don't totally get the meaning of "is now aligned with the convergence level of each sample", what do you formally mean with the word "aligned"? And in which sense does this "resolve" the inversely-scaled projection issue?

We agree that how iEF resolves the inversely-scaled projection issue of EF could be better explained, and we provide an alternative, clearer explanation here. The inversely-scaled projection issue of EF is mainly caused by its per-sample loss reduction $\Delta \mathbf{l_\text{EF}} = -\eta\mathbf{1}$ (see the equation below line 129), which blindly enforces an equal loss reduction for all training samples without considering their individual level of convergence.
This causes EF updates to overly focus on converged samples (with small $||\nabla_\theta l_n||$) while ignoring untrained samples (with large $||\nabla_\theta l_n||$), as is shown by the projection term $\kappa_\text{EF}$ in Eqn (6). To resolve this issue, we improve the per-sample loss reduction behaviour of the EF update to account for the individual convergence level of each sample. The per-sample loss reduction behaviour of iEF is designed to be the same as a GN update: $\Delta \mathbf{l_\text{iEF}} = -\eta||\nabla_{z_n}l_n||^2$, as is shown in Eqn (11). This allows iEF updates to be "curvature-aware", focusing on reducing the loss of untrained samples (with large $||\nabla_{z_n}l_n||$) while being conservative with converged samples (with small $||\nabla_{z_n}l_n||$). Note that $||\nabla_{z_n}l_n||$ is believed to be a good indicator of the convergence level of a sample because the objective function $l_n(z_n)$ is convex, and $||\nabla_{z_n}l_n||$ in general becomes smaller as the sample approaches its minimum. We feel it would be easier for readers to understand the motivation of iEF if we use Eqn (11) (the per-sample loss reduction) for motivation instead of Eqn (7) (the per-sample projection), and we are happy to make such edits in the final paper. > Lines 130-131: "has full row rank, and both $\lambda$ and $\eta$ are infinitesimally small", aren't this essentially Assumption 5.1 and Assumption 5.2? The reviewer is correct. We will add a reference to Assumptions 5.1/5.2 on lines 130-131 in the final paper. > Is it correct that the reason that limits iEF to scale to big models is the same reason that limits EF (i.e. the matrix inversion part)? The reviewer is correct that the practical limitations of exact EF/iEF are identical (given their similar generation process in Algorithm 1/2). 
However, the main bottleneck of applying iEF/EF to big models is the memory complexity of storing the per-sample gradients, $O(MP)$ ($M$ being the batch size and $P$ being the trainable parameter size), rather than the "matrix inversion" (for a Gram matrix of size $M\times M$), which is manageable as long as $M$ is not too large. A detailed discussion is provided in the "Time and Memory Complexity" paragraph of Appendix D.1. --- Rebuttal Comment 1.1: Title: Keep score Comment: Thanks for the clarifications and thanks for the toy classification visualization, great job! I keep my score and confirm my willingness for acceptance. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you very much for your positive review of our paper!
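To make the per-sample loss-reduction contrast from the rebuttal above concrete, here is a minimal NumPy sketch. It is not the paper's Algorithm 1/2: the matrix sizes and the values of $s_n = ||\nabla_{z_n} l_n||^2$ are made up, and the updates are constructed as the minimum-norm solutions achieving the stated reductions ($\Delta l = -\eta\mathbf{1}$ for EF, $\Delta l_n = -\eta s_n$ for iEF) with zero damping via the $M \times M$ Gram matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
M, P, eta = 4, 10, 0.1
G = rng.normal(size=(M, P))          # rows: per-sample gradients g_n (full row rank)
s = rng.uniform(0.5, 2.0, size=M)    # s_n = ||grad_{z_n} l_n||^2 (illustrative values)

K = G @ G.T                          # M x M Gram matrix
u_ef = -eta * G.T @ np.linalg.solve(K, np.ones(M))  # enforces equal loss reduction
u_ief = -eta * G.T @ np.linalg.solve(K, s)          # convergence-aware reduction

# First-order per-sample loss changes g_n . u:
delta_ef = G @ u_ef    # = -eta for every sample, converged or not
delta_ief = G @ u_ief  # = -eta * s_n, small for converged samples
```

With this construction, `delta_ef` is exactly `-eta` for all samples (the "blind" behaviour), while `delta_ief` scales each sample's reduction by its convergence indicator, which is the fix described in the response.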
Summary: Many approaches have been proposed to approximate natural gradient descent; however, most of them rely on estimating the Fisher matrix with the empirical Fisher. This estimate is known to have the inversely-scaled projection issue, where the update is inversely proportional to the per-sample gradient norm, so samples that are not well-learnt receive only small updates. In addition, as the covariance of the gradient is not invertible, a damping factor is required to ensure invertibility, and this parameter can be hard to tune in practice. To resolve this issue, the improved empirical Fisher (iEF) is proposed, which multiplies the per-sample gradient by the norm of the likelihood's gradient with respect to the logits, so that all samples have the same scale of updates. The authors then empirically show that iEF is a better approximation to natural gradient descent (using a novel evaluation metric) compared with the standard empirical Fisher, and show better performance than AdamW on parameter-efficient fine-tuning tasks. Strengths: - The proposed method is well motivated and supported by a convergence guarantee. - The proposed indicator for evaluating natural gradient approximation quality is very interesting and useful. - The empirical experiments verify the usefulness of the proposed indicator and the proposed iEF method. - The experiment settings considered are very up-to-date and practical: parameter-efficient fine-tuning for large models. Weaknesses: - The evaluation only contains fine-tuning experiments with parameter-efficient fine-tuning techniques; it would be nice to have some train-from-scratch experiments as well. In addition, the workloads considered are of rather small scale in terms of tunable parameters, e.g. ResNet-18 has 11M parameters, while the largest task in the submission, fine-tuning T5 on GLUE, only contains 0.8M parameters. 
- The experiments only consider a small number of training iterations (which is common for the fine-tuning setting), but it would still be nice to see the behavior of the proposed method in longer training runs. - It would be better if the authors could cite [1] for Fig. 1. - It would be nice to see experiments for iEF + KFAC (as is discussed in line 722). [1] Limitations of the Empirical Fisher Approximation for Natural Gradient Descent Technical Quality: 3 Clarity: 3 Questions for Authors: - How does the method compare with Gauss-Newton? Can GN be considered as a baseline approach? - Why does the approximation quality get worse as optimization proceeds? (first three subfigures in Fig. 2) - Why is the relationship between approximation quality and damping factor non-monotonic? - How does iEF compare with AdamW in terms of computational cost? - In the appendix, line 960 says SGD's learning rate is searched between {0.1, ... ,100}; 100 seems to be a pretty large number? Is that a typo? - I wonder if the authors could provide a learning rate vs. metrics plot to better convince readers that the learning rate is well tuned. - The baseline AdamW uses weight decay but iEF does not incorporate weight decay; it would be nice to see results of AdamW without weight decay. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - The proposed method could potentially be more expensive than standard optimizers as Eq. 8 requires per-sample gradients. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Author Response to Reviewer Sh4Z Thank you for your review! >The evaluation only contains fine-tuning experiments with parameter efficient fine-tuning techniques, it would be nice to have some more train-from-scratch...small...tunable parameters...small number of training iterations. We agree that a train-from-scratch setup with a large number of training epochs and with a model with a large number of parameters can further support the claims of our paper. We have conducted an experiment for CIFAR10 + large MLP (~10M parameters), which further validates the claims in the main paper. Please refer to ***AE2 of the Global Author Response*** for detailed experiment setup and results. > It would be better if the authors could cite [1] for Fig. 1. We have cited [1] in Appendix B, which is referenced in the caption for Fig. 1. We will add a citation to [1] directly in the caption for Fig. 1 in the final version of the paper. [1] Frederik Kunstner, Lukas Balles, and Philipp Hennig. (2019). “Limitations of the Empirical Fisher Approximation for Natural Gradient Descent”. > It would be nice to see experiments for iEF + KFAC (as is discussed in line 722). We have conducted a preliminary evaluation (using our empirical evaluation framework) for block-diagonal versions of EF, iEF and SF on a selection of PEFT tasks. We found that KFAC with iEF achieves the most consistently good approximation quality to NG updates as compared to standard KFAC and KFAC with EF. Please refer to ***AE3 of Global Author Response*** for the detailed experiment setup and results. > How does the method compare with Gauss-Newton? Can GN be considered as a baseline approach? The GN algorithm is as expensive to implement as the NGD method, which cannot be easily implemented as a baseline optimiser for large setups. 
However, it is possible to extend the evaluation framework (by replacing the Fisher matrix $\mathbf{F}$ with the GN matrix $\hat{\mathbf{G}}$ in Eqn (16)) to evaluate the update approximation quality to an exact GN update. > Why does the approximation quality get worse as optimization proceeds? (first three subfigures in Fig. 2) The datapoints reported in the first 3 subfigures of Fig 2 are relative indicator values w.r.t. SGD, i.e. $\frac{\gamma_\text{update}}{\gamma_\text{SGD}}$. Hence, the overall increasing trend only means that the relative improvement of the approximation quality to the NG update w.r.t. SGD diminishes as training progresses; it does not necessarily mean that the approximation quality becomes worse. The cause requires further investigation, and is likely dependent on the model structure and task of the setup. > Why is the relationship between approximation quality and damping factor non-monotonic? The relationship between damping and approximation quality for iEF is overall monotonic, but it is non-monotonic for both SF and EF approximations (see Figs 3/6/7/8/9). Although there is no guarantee that the relationship between damping and approximation quality should be monotonic, the fact that iEF has an overall monotonic relationship while SF and EF do not indicates that iEF is much better behaved and less sensitive to damping tuning, which is a practical advantage of iEF. > How does iEF compare with AdamW in terms of computational cost?...The proposed method could potentially be more expensive than standard optimizers as Eq. 8 would require per-sample gradient. The analysis of time and memory complexity of the iEF method is provided in Appendix D.1. The per-sample gradients can theoretically be obtained when computing the batch gradient, and incur no additional computational cost. 
The main additional cost comes from the computation of the Gram matrix: $O(M^2P)$, $M$ being the batch size and $P$ being the parameter size, which is relatively small compared to a back-propagation (assuming $M$ is not too large). However, currently our implementation of the per-sample gradient is achieved with backward hooks (in Pytorch), which roughly doubles the cost of the standard back-propagation process. We would like to emphasize that the exact iEF method implemented in our paper is mainly for theoretical accuracy during experimental comparison. Various mature approximation techniques could be used to accelerate practical iEF-based optimisers (as suggested in Appendix D.2). > In the appendix, line 960 says SGD's learning rate is searched between {0.1, ... ,100}; 100 seems to be a pretty large number? Is that a typo? I wonder if the authors could provide a learning rate vs. metrics plot to better convince readers that the learning rate is well tuned. The SGD learning rate search range for LoRA + GLUE setups was indeed {0.1, 1, 10, 20, 50, 100} (identical range for PT). Such a large learning rate is searched mainly because in PT setups, which we carried out first, the gradient norm is significantly smaller than for other setups such as LoRA. Hence, a large learning rate is necessary for the model to converge effectively. We provide a table of the average validation accuracy (1 seed) vs. SGD learning rate below for the LoRA + GLUE setup to show that our learning rate is well tuned (we also searched an additional 0.01). The validation accuracy is averaged across 5 selected tasks (CoLA, SST-2, MRPC, QQP, RTE).

|lr|Avg Val Acc.|
|-|-|
|0.01|80.6|
|0.1|86.9|
|1|64.1|
|10|60.9|

> The baseline AdamW uses weight decay but iEF does not incorporate weight decay, it would be nice to see results of AdamW without weight decay. We re-ran the AdamW + LoRA + GLUE experiments without weight decay. 
The averaged validation accuracy (3 seeds) for 5 selected tasks (CoLA, SST-2, MRPC, QQP, RTE) is as follows:

|Method|Avg Val Acc.|
|-|-|
|AdamW|88.0|
|AdamW w/o wd|87.8|
|iEF|87.7|

AdamW without weight decay overall shows only slightly worse generalisation, but indeed improves the relative performance of iEF. We are happy to include these results in the final paper. --- Rebuttal 2: Comment: Dear Reviewer Sh4Z, For the reviewer's information, we have by now provided additional experimental results for another train-from-scratch setup, CIFAR10 + ResNet32, to further support the claims in the main paper. Details are provided in Section ***AE4: Train-from-scratch Experiment for CNN*** in the Official Comment to the Global Author Response. Note that iEF achieved the best test accuracy of 85.6% for CIFAR10 in this setup, which makes it a more practically usable setup than the MLP setup in ***AE2*** (best test accuracy of 58.6%). We believe this additional CNN-based experiment (***AE4***), together with the large MLP experiment (***AE2***), has better addressed your concerns regarding the lack of a large train-from-scratch experiment in our submission. Best regards, Paper 7035 Authors
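As a toy illustration of the per-sample-gradient point discussed in the thread above (a hypothetical least-squares model, not the paper's setup): the per-sample gradients fall out of the same quantities used for the batch gradient, but storing all of them costs $O(MP)$ memory versus $O(P)$ for the averaged gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
M, P = 8, 5                            # batch size, parameter count
X = rng.normal(size=(M, P))
y = rng.normal(size=M)
w = rng.normal(size=P)

residual = X @ w - y                   # per-sample residuals r_n = w . x_n - y_n
per_sample = residual[:, None] * X     # M x P: gradient of 0.5 * r_n^2 for each sample
batch_grad = per_sample.mean(axis=0)   # the O(P) object SGD/Adam actually need

# Storing per_sample costs M times the memory of batch_grad:
print(per_sample.nbytes // batch_grad.nbytes)  # -> 8
```

No extra backward pass is needed to form `per_sample` here; the cost is purely the $M$-fold memory blow-up, which matches the bottleneck the authors describe.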
Rebuttal 1: Rebuttal: # Global Author Response Thank you all for your positive reviews! We have attached to this global response a pdf document, which contains information for additional experiments (***AE***) (4 figures with captions) that address concerns regarding our paper. This document is referred to as the "Global Response Document" in our separate author responses to each reviewer. A summary of the contents of this pdf is provided as follows. ### AE1: Logistic Regression Classification To address the suggestions of reviewer 13ZJ regarding the need for a classification-based visualisation for the iEF method (in addition to the least-squares regression visualisation in Figure 1 of the main paper), we have provided a 2D visualisation for a toy logistic regression problem in Figure 1 of the global response document (following the style of Figure 1 in the main paper). Detailed problem setups are provided in the caption, and update generation of SGD/EF/iEF/NGD follows that of Appendix B in the main paper. The visualisation further validates the distortion of EF updates in the logistic regression problem, and also demonstrates that iEF indeed resolves the inversely-scaled projection issue and achieves high resemblance to NG updates. ### AE2: Train-from-scratch Experiment To address the suggestions of reviewers Sh4Z and Khnn regarding the need for additional experiments on a classical train-from-scratch image classification task with a larger model and more training iterations, we have provided a full set of experimental results for a CIFAR10 + MLP setup (Figures 2, 3 in the global response document, corresponding to E1, E2, E3 in the main paper for PEFT setups). The 3-layer ReLU-activated MLP model has a parameter size of 10,510,346 (~10M), takes in a flattened 3x32x32 image, and has two 2048 hidden layers (developed based on the setups described in [1, 2]). 1. 
An MLP model is used instead of a ResNet model for the following reasons: **1)** It is straightforward to extend our current per-sample gradient implementation from the LoRA setup to an MLP model, both of which involve only Linear modules (in Pytorch); **2)** Reviewers Sh4Z and Khnn suggested using a ResNet of ~10M parameters. However, for CIFAR setups, 500 residual blocks are needed to reach this parameter size. While ResNet18 for ImageNet indeed has ~10M parameters, the training set of ImageNet is too large to complete during the limited rebuttal time [3]. Both options are difficult to run given the time constraints; **3)** We believe the current setup is sufficient to provide insight into the behaviours of EF/SF/iEF in a larger train-from-scratch setup. 2. During optimisation, no weight decay or dropout is applied. The Adam, SGD, EF, SF and iEF optimisers are used, and for all optimisers, 60 epochs are run with a batch size of 64. The learning rate 1e-4 of Adam is searched from {5e-5, 1e-4, 5e-4}. The learning rate 0.1 of SGD is searched from {0.01, 0.1, 0.5}. The learning rate 50 of iEF is searched from {10, 50, 100}. The learning rate 0.1 of SF is searched from {0.01, 0.1, 0.5}. The learning rate 1e-4 of EF is searched from {1e-5, 1e-4, 1e-3, 0.01, 0.1}. A normalised update with a linear scheduler is used for EF and SF as in the paper. A constant learning rate is used for iEF. A multi-step scheduler (0.1 decay at fixed epoch number 15 [3]) is used for Adam and SGD. The training loss and validation accuracy curves are plotted in Fig. 2 of the global response document, with error bars computed from 3-seeded runs. The final results of the experiments are as follows:

| Method | Val Acc. | Test Acc. |
| - | - | - |
| iEF | $58.8\pm0.87$ | $58.6$ |
| Adam | $56.3\pm0.22$ | $56.6$ |
| SGD | $54.3\pm0.66$ | $54.4$ |
| SF | $54.8\pm0.08$ | $55.2$ |
| EF | $28.2\pm1.43$ | $29.2$ |

Overall, we can observe that EF < SF $\approx$ SGD < Adam < iEF in terms of both generalisation and convergence, which is aligned with the claims in the main paper. 3. The approximation quality w.r.t. damping factor result is presented in Fig. 3 of the global response document, which shows results consistent with the conclusions drawn in experiments E1 and E3 of the main paper. Note that due to space limits, we are unable to provide a plot for experiment E1 (equivalent to Fig. 2 in the main paper). [1] Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, Nathan Srebro. (2018). Towards Understanding the Role of Over-Parametrization in Generalization of Neural Networks. [2] Mahesh Chandra Mukkamala, Matthias Hein. (2017). Variants of RMSProp and Adagrad with Logarithmic Regret Bounds. [3] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. (2015). Deep Residual Learning for Image Recognition. ### AE3: KFAC + iEF Experiment As requested by Reviewer Sh4Z, we have conducted a preliminary evaluation of the concept of combining KFAC with iEF (as discussed in Appendix D.2 of the main paper). The empirical evaluation framework proposed in the main paper is used to evaluate the approximation quality of 3 additional block-diagonal methods: KFAC, eKFAC and ieKFAC. KFAC stands for standard KFAC with SF (with 1 MC sample per datum). eKFAC stands for KFAC with EF (see Eqn (43)) and ieKFAC stands for KFAC with iEF (see Eqn (44)). All methods use a damping factor of 1e-7 as in experiment E1 of the main paper. The approximation quality w.r.t. training progress for 3 selected tasks, QNLI+T5+LoRA, RTE+T5+LoRA and MRPC+T5+LoRA, is shown in Figure 4 of the global response document. 
It is demonstrated that ieKFAC achieves the most consistent improvement of the approximation quality to NG updates, as compared to KFAC and eKFAC. This indicates the potential of developing ieKFAC in future work. ### Further Additional Experiments There are other additional experiments that cannot fit into the one-page global response document. We will provide these results in our separate individual author responses. Pdf: /pdf/b3b8a0518ac5e99f99be52dc503d646752106e60.pdf
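As a sanity check on the AE2 parameter count, assuming the MLP is the plain stack 3072 → 2048 → 2048 → 10 with biases (our reading of "flattened 3x32x32 image" with "two 2048 hidden layers" and 10 CIFAR10 classes; the exact layer layout is an assumption):

```python
# Flattened 3x32x32 input (3072 dims), two 2048 hidden layers, 10 output classes.
dims = [3 * 32 * 32, 2048, 2048, 10]
params = sum(d_in * d_out + d_out for d_in, d_out in zip(dims, dims[1:]))
print(params)  # -> 10510346, matching the figure quoted in AE2
```

The count works out to exactly 10,510,346, so this layout is consistent with the stated ~10M parameter size.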
NeurIPS_2024_submissions_huggingface
2,024
MaskLLM: Learnable Semi-Structured Sparsity for Large Language Models
Accept (spotlight)
Summary: This paper introduces the concept of learnable semi-structured sparsity (N:M sparsity). This extends practical N:M sparsity to pretrained LLMs and makes the masks learnable. The paper also makes the transfer of masks from other sparsity techniques work for N:M. Finally, the general N:M masks learned can be easily fine-tuned for each task with great performance. The paper also shows extensive evaluations on various practical LLMs with SOTA performance compared to other sparse LLM methods. ---------------------- The review will be short and does not reflect the time put in for the review or the quality of the paper. When the ideas are simple and clear -- I tend to write shorter reviews to the point. Strengths: I really enjoyed reading this paper. It was a very practical paper on many levels among those I have read on sparsity and applicability to LLMs. The idea of differentiable masks is not new -- as pointed out by the authors -- in CS (Savarese et al., 2019), STR (Kusupati et al., 2020) along with other methods mentioned in the paper. Same goes for N:M. However, bringing them together (again done earlier, as mentioned by the paper) and combining them with pretrained LLMs makes it very useful in practice. I really like 2 other aspects apart from the base idea and practicality. The transfer of one-shot pruning masks to N:M as scaffolds and then using learned N:M masks for downstream tasks was strong. The results are pretty solid as well. I also like the downstream evals and not just having perplexity values. The findings highlighted in the paper are very useful for further research. I am strongly in support of the paper unless I missed something obvious. I appreciate the authors for the comprehensive paper. Weaknesses: I do not find any glaring weaknesses, concerns, or questions about the current version of the paper. However, I might have missed something and would rely on the other reviewers if I was wrong. 
My only suggestion for the paper is to add a more comprehensive related work section to complete the paper. Maybe introducing learning sparsity be it mask learning with STE or STR would be a nice thing to add. Technical Quality: 4 Clarity: 4 Questions for Authors: See above. Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for the invaluable suggestions and the positive comments! We will polish our draft with the following new results and make our code and learned masks public for better reproducibility. > **Q1: My only suggestion for the paper is to add a more comprehensive related work section to complete the paper. Maybe introducing learning sparsity be it mask learning with STE or STR would be a nice thing to add.** **A1:** Thanks for the suggestion. We provide new results in the following table by re-implementing the SR-STE method [1]. Additionally, we compare MaskLLM to several fine-tuning methods [2,3] from existing works, including simple fine-tuning and PEFT. This table indicates that MaskLLM, even without any weight updates, can achieve competitive results (PPL=6.72) compared to SR-STE (PPL=6.80). This result is expected, as MaskLLM can explore all mask candidates, whereas SR-STE primarily focuses on weights with large magnitudes. Additionally, incorporating sparse fine-tuning into MaskLLM can significantly improve both perplexity (PPL) and zero-shot accuracy on HellaSwag. We will polish the related work section and the appendix to include those baselines, and will also make our code and learned masks public for better reproducibility. 
|Method|Weight Update|LLM|Dense PPL↓|Sparse PPL↓|Δ PPL ↓|HellaSwag ↑|
|:-|:-:|:-:|:-:|:-:|:-:|:-:|
|Wanda + Sparse Finetuning [1]|✔️|Llama-1 7B|5.68|7.02|+1.33|n/a|
|Wanda + LoRA [1]|✔️|Llama-1 7B|5.68|8.24|+2.56|n/a|
|SR-STE [2]|✔️|Llama-2 7B|5.12|6.80|+1.68|51.26|
|SparseGPT + SPP [3]|✔️|Llama-2 7B|5.12|n/a|n/a|51.33|
|Wanda + SPP [3]|✔️|Llama-2 7B|5.12|n/a|n/a|50.61|
|**MaskLLM + Sparse Finetuning**|✔️|**Llama-2 7B**|**5.12**|**5.83**|**+0.71**|**54.66**|
|**MaskLLM**|-|**Llama-2 7B**|**5.12**|**6.72**|**+1.60**|**50.91**|

*Table 1: Comparison to more finetuning-based baselines.* [1] Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch [2] A Simple and Effective Pruning Approach for Large Language Models [3] SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. After looking at other reviews as well, I am in support of accepting this paper. Please ensure to update the related work section and incorporate the new experiments presented here in the final paper. --- Reply to Comment 1.1.1: Comment: Thank you for the very positive comments and invaluable suggestions! We will incorporate all the mentioned results into the draft as suggested. Best regards, Authors of #1520
Summary: This paper proposes MaskLLM, a learnable method to craft semi-structured sparsity in Large Language Models (LLMs). The approach involves modeling N:M masks with a categorical distribution, which can be optimized through Gumbel softmax. The key findings in this paper suggest that end-to-end learning of mask parameters can produce more effective and accurate sparsity patterns compared to one-shot methods. Additionally, the proposed MaskLLM also supports transfer learning for downstream tasks, where lossless masks can be learned for deployment. Strengths: 1. The ideas of learnable masks and transfer learning are innovative for sparse LLMs. The proposed method enables task-oriented compression for downstream applications without necessitating the re-training of LLM weights, making it practical for real-world applications. 2. Results on several LLMs are positive. Table 1 demonstrates that the learnable mask method can achieve superior performance compared to state-of-the-art methods, while keeping the LLM weights frozen throughout the learning process. This indicates significant potential for further improvements in N:M sparsity within LLMs. 3. The results in Table 4 are interesting. They show that the learnable mask does not require many training steps and samples to outperform the one-shot baseline, and the proposed method is more scalable with large-scale datasets. 4. The key ideas and findings in this paper are clear and well-organized. It is easy to understand the key messages in the different experiments. Weaknesses: 1. One of my concerns is the use of non-public datasets and LLMs in this study. It would be beneficial for the paper to include more results using publicly available data and open-source models, such as LLaMA-3, to enhance the reproducibility and applicability of the findings. 2. This is a question about the storage cost in Table 6: Could the authors clarify the actual storage costs, such as the file size on disk? 
Additionally, what is the data format used during the fine-tuning process? 3. Table 2 highlights the significance of the prior in mask selection. Does this suggest that the proposed method is only trying to fine-tune the prior mask rather than learn something new? What happens if we don’t have SparseGPT prior? Is the learned mask very similar to its prior? Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to "Weaknesses". Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
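For readers unfamiliar with the relaxation described in the summary above, here is a small NumPy sketch of Gumbel-softmax sampling over the six 2:4 mask candidates. This is an illustrative toy, not the authors' implementation: learnable logits per weight block, temperature scheduling, and the straight-through hardening step are all omitted.

```python
import numpy as np

# The 6 candidate 2:4 masks: keep exactly 2 of every 4 consecutive weights.
CANDIDATES = np.array([[1, 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1],
                       [0, 1, 1, 0], [0, 1, 0, 1], [0, 0, 1, 1]], dtype=float)

def gumbel_softmax_mask(logits, tau=1.0, rng=None):
    """Sample a soft, differentiable 2:4 mask from a categorical over the candidates."""
    if rng is None:
        rng = np.random.default_rng()
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0,1) noise
    scores = np.exp((logits + gumbel) / tau)
    probs = scores / scores.sum()      # relaxed one-hot over the 6 candidates
    return probs @ CANDIDATES          # convex combination of hard masks

mask = gumbel_softmax_mask(np.zeros(6), tau=0.5, rng=np.random.default_rng(1))
# As tau -> 0 the sample approaches one hard candidate; at any tau the soft
# mask preserves the N:M budget, summing to exactly 2 of 4 kept weights.
```

Because every candidate keeps exactly 2 of 4 weights and the relaxed probabilities sum to 1, the soft mask always sums to 2, which is what lets the N:M structure survive the relaxation during training.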
Rebuttal 1: Rebuttal: Thanks for the invaluable comments and questions! > **Q1: One of my concerns is the use of non-public datasets and LLMs in this study. It would be beneficial for the paper to include more results using publicly available data and open-source models, such as LLaMA-3, to enhance the reproducibility and applicability of the findings.** **A1:** Thank you for the suggestions. We provide additional results for the public C4 dataset and Llama-3 8B. **1) The C4 dataset:** The following table demonstrates that our method achieves comparable results across different datasets. However, the blended dataset yielded slightly better results due to its inclusion of more diverse domains, such as programming.

|Method|Blended Data|C4|
|-|:-:|:-:|
|Llama-2 7B|5.12|5.12|
|SparseGPT|9.88|10.42|
|Wanda|11.25|10.29|
|MaskLLM|**6.72**|**6.85**|

*Table 1: Wikitext-2 PPL of 2:4 LLaMA-2 7B pruned with different datasets.* **2) Llama-3 8B**: In addition, we also provide new results for Llama-3 8B using only the public C4 dataset, with the same training protocol and hyperparameters as Llama-2 7B for mask learning. Our method continues to achieve superior performance on the latest Llama-3 model. We will include all these new results in the appendix.

| Method | Weight Update | Wikitext-2 PPL |
|----|:-:|:-:|
| Llama-3 8B Dense | - | 5.76 |
| Magnitude | - | 2.61E+03 |
| SparseGPT | ✔️ | 17.64 |
| Wanda | - | 23.40 |
| MaskLLM | - | **8.50** |

*Table 2: Wikitext-2 PPL of 2:4 LLaMA-3 8B, with a sequence length of 4096. We took the SparseGPT mask as the prior and only used the public C4 dataset for this experiment.* > **Q2: This is a question about the storage cost in Table 6: Could the author clarify the actual storage costs, such as the file size on disk? 
Additionally, what is the data format used during the fine-tuning process?** **A2:** Regarding the Llama-3 model, the actual storage cost for the binary mask is 564MB, which is compressed using the lossless compression method -- ``np.savez_compressed``. In contrast, the original model requires 15GB for storage. For five downstream tasks, using SparseGPT would require 5x (15GB/2 (for 50% zeros) + 564MB) on disk, but MaskLLM would require only 15GB + 5x 564MB. The training is conducted with BF16 precision. > **Q3: Table 2 highlights the significance of the prior in mask selection. Does this suggest that the proposed method is only trying to fine-tune the prior mask rather than learn something new? What happens if we don’t have the SparseGPT prior? Is the learned mask very similar to its prior?** **A3:** Thank you for the question. A mask prior can be a useful jump-start, helping to reduce the number of samples needed to achieve good quality. MaskLLM indeed inherits some masks from the prior and refines them to enhance performance. Figure 8 in the appendix illustrates the mask differences. A small difference is observed between magnitude pruning and other one-shot methods like SparseGPT and Wanda. However, after training, the mask difference between MaskLLM and one-shot methods can be larger than the differences among the one-shot methods themselves. Regarding the availability of different priors, we believe that the magnitude prior is always accessible for mask learning, showing comparable or even superior performance to other priors like SparseGPT or Wanda. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: I have read through the response from the authors. The authors provided sufficient experiment results and statistics in the rebuttal to support their claims. My three concerns are adequately addressed. I have also read through the other reviews, which prove the work is rather solid. I will raise my score. 
--- Reply to Comment 1.1.1: Comment: Thanks so much for the encouraging feedback! We will keep refining the draft with all the additional experiments. Best regards, Authors
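The `np.savez_compressed` mechanism discussed in the thread above can be illustrated at toy scale. The sizes here are made up (a 128x128 boolean 2:4 mask rather than a full Llama-3 mask); only the mechanism matches the rebuttal.

```python
import io

import numpy as np

rng = np.random.default_rng(0)
# A hypothetical 2:4 boolean mask: keep exactly 2 of every 4 consecutive weights.
groups = rng.permuted(np.tile([True, True, False, False], (4096, 1)), axis=1)
mask = groups.reshape(128, 128)

buf = io.BytesIO()
np.savez_compressed(buf, mask=mask)  # lossless zip compression, as in the rebuttal
stored, raw = buf.getbuffer().nbytes, mask.nbytes

buf.seek(0)
restored = np.load(buf)["mask"]      # round-trips exactly, since compression is lossless
```

Boolean masks occupy one byte per entry in memory but carry far less than one bit of entropy per byte under the 2:4 constraint, so the compressed file is much smaller than the raw array, consistent with the 564MB vs 1-bit-per-parameter discussion above.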
Summary: The paper introduces MaskLLM, a method for introducing semi-structured sparsity (N:M mask patterns) in LLMs. The authors show that existing model pruning methods such as SparseGPT result in a significant loss in model quality at smaller scales (800M ~ 15B parameters) when using semi-structured methods. They formulate the problem of finding a good N:M pattern as a mask selection problem from a candidate set of masks $S$. They formulate this further as a sampling problem, whereby they sample masks for all $\mathcal{W}^{1\times4}$ parameters in a layer to measure model quality - since exact mask computation is an intractable problem for large models. They propose using Gumbel-softmax sampling to obtain soft / differentiable masks that can be learned through a training process, to optimize the sampling problem. The authors further find that learning the masks with their proposed method can result in vanishing gradients throughout the network, and introduce a sparse weight regularization to promote higher gradient norms through the network. They further find that using methods such as SparseGPT as mask priors enables more effective sampling (i.e., learning) of the final masks. They follow these up with different experiments and ablations to highlight the strengths of their proposed method. Strengths: 1. The paper formulates the selection of N:M masks as an optimization problem, which enables scaling the method to large datasets and models. 2. The paper walks through the math required to understand their optimization formulation step-by-step, making it simple to understand. 3. The method focuses on common problems introduced by pruning (such as vanishing gradients of pruned weights etc.) and proposes ways to resolve them through regularization. 4. For each of the parts of the proposed method, the authors present ablations and results that validate their design choices. 5. 
The authors also showcase how their method can scale for transfer learning to downstream tasks (both via fine-tuning and via mask transfer, as proposed via their prior-initialized method). 6. The paper shows performance results from using their method (~1.3-1.4x faster) on A6000 GPUs. Weaknesses: 1. The best MaskLLM results presented rely on SparseGPT as a prior for mask initialization and then compare against SparseGPT for efficacy. This seems to be an unfair comparison, since you're taking the method's best result and then improving on top of that. For example, without prior masks, the method seems to be much closer in performance to SparseGPT for the LLaMA-2 7B model (9.12 vs 10.42). It would be good to see how the method performs for, say, the 13B model without using any priors. 2. The authors show that SparseGPT has fundamental limits for improvement based on the number of samples (which is well documented and understood) - but the comparisons shown, e.g., in Figure 4 are with different datasets? Did the authors test the collated datasets for the MaskLLM training with SparseGPT and what plateaus were observed there? 3. One inherent limitation of the method seems to be the many hyper-parameters to tune to find good masks ($\alpha$ for the SparseGPT prior, $k$ for logit scaling, $\tau$ for softmax temperature, and $\lambda$ for the regularization coefficient). Do these hyper-parameters scale optimally for all model scales? Or is more fine-grained optimization needed as models scale up? - Also for the $\lambda$ parameter, the authors show that using 1e-4 results in the highest gradient norm [Table 12], but the hyper-parameter used in Table 7 is 1e-5; can the authors clarify this aspect? Technical Quality: 3 Clarity: 2 Questions for Authors: 1. For the LLaMA-2 models, the authors mention using the official dataset from the paper. However, the paper has no mention of any datasets used for training the models.
Can the authors clarify this discrepancy? 2. In Table 11, when the authors mention the score from RIA, does that include the channel permutation method enabled in the paper? 3. Can the authors verify their magnitude pruning results for downstream tasks for Llama2-7B (Table 1)? Given such a large perplexity difference between magnitude pruning and SparseGPT / Wanda, the downstream results seem too high. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: 1. There are some inconsistencies in the links in the paper (e.g., Appendix D, mentioned on page 8 for weight regularization, maps to Section 4.1 in the paper - please fix this). 2. Throughout the paper, it is unclear which results map to which experiments. For example, for Figure 5, which models were used for the ablation? Same for the results in Tables 3, 5, and 13. Scanning the Appendix also did not help clarify this. It makes the results presented somewhat non-trivial to parse. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
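The Gumbel-softmax mask sampling the summary describes can be sketched numerically. This is a toy illustration, not the authors' implementation: the function name is invented, and the real method learns the logits with autograd over many weight blocks rather than with plain NumPy.

```python
import itertools
import numpy as np

def gumbel_softmax_mask(logits, candidates, tau=1.0, rng=None):
    """Sample a soft 2:4 mask as a convex combination of candidate masks.

    logits:     (num_candidates,) learnable score per candidate mask.
    candidates: (num_candidates, 4) binary masks, each keeping two of four weights.
    tau:        temperature; lower values approach a hard (one-hot) choice.
    """
    rng = rng or np.random.default_rng(0)
    # Gumbel(0, 1) noise: argmax(logits + g) is a sample from softmax(logits).
    g = -np.log(-np.log(rng.uniform(1e-9, 1.0, size=logits.shape)))
    p = np.exp((logits + g) / tau)
    p /= p.sum()
    # Soft, differentiable mask: a probability-weighted mix of hard masks.
    return p @ candidates

# All C(4,2) = 6 candidate 2:4 masks for one block of four weights.
candidates = np.array([m for m in itertools.product([0, 1], repeat=4)
                       if sum(m) == 2], dtype=float)
soft_mask = gumbel_softmax_mask(np.zeros(6), candidates, tau=0.5)
print(soft_mask)  # four values in [0, 1] that sum to 2
```

As the temperature tau is annealed toward zero, the weights p concentrate on one candidate and the soft mask approaches a hard 2:4 mask.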
Rebuttal 1: Rebuttal: Thank you so much for the invaluable comments and suggestions about baselines, datasets, and hyperparameters! > **Q1: 1) The comparison between MaskLLM with SparseGPT prior and SparseGPT seems unfair. The method seems to be much closer in performance to the SparseGPT method without a prior. 2) It will be good to see how the method performs for, say, the 13B model without using any priors.** **A1:** Thank you for the insightful comment. One-shot methods like SparseGPT excel at efficiently identifying a coarse mask. In contrast, MaskLLM, a learning-based method, identifies better masks but is also more resource-intensive. Without a prior, 2,000 steps might not be sufficient to learn all the masks. Moreover, using a trivially-computable Magnitude mask as the prior also yields a respectable PPL of 6.77, compared to the 6.72 achieved with a SparseGPT prior. This is likely due to the high similarity of the masks found by Magnitude and SparseGPT (demonstrated in Figure 8 of the appendix). From our perspective, the mask prior is an important design choice in this work, and it is always encouraged and feasible to start training from some prior, which can be easily identified with even magnitude pruning. **13B model without using any priors:** Regarding the requested 13B experiment, the training is still ongoing. We will keep working to provide new results; however, due to resource and time constraints during the rebuttal period, we cannot immediately present the training results for the 13B model. > **Q2: The authors show that SparseGPT has fundamental limits for improvement based on the number of samples (which is well documented and understood) - but the comparisons shown, e.g., in Figure 4 are with different datasets? Did the authors test the collated datasets for the MaskLLM training with SparseGPT and what plateaus were observed there?** **A2:** For the one-shot methods, we use the C4 dataset as suggested in the original papers.
For training with MaskLLM, we utilize a blended dataset collected from the Internet, which covers 69 domains such as programming languages, as mentioned in Line 430 of the appendix. Following the advice, we provide more results to evaluate the effectiveness of SparseGPT and MaskLLM on different data sources. The following table shows results obtained by **only** using the blended dataset or the public C4 dataset. It can be observed that the results between datasets are comparable and MaskLLM is still better than the baselines. We will include these additional results in the revised version of the paper.

|Calibration Data|Blended Data|C4|
|-|:-:|:-:|
|Llama-2 7B|5.12|5.12|
|SparseGPT|9.88|10.42|
|Wanda|11.25|10.29|
|MaskLLM (2K steps)|**6.72**|**6.85**|

*Table 1: Wikitext-2 PPL of 2:4 LLaMA-2 7B pruned with different datasets.*

> **Q3: 1) There are many hyper-parameters to tune. Do these hyper-parameters scale optimally for all model scales? Or is more fine-grained optimization needed as models scale up? 2) Why was 1e-5 selected for regularization?** **A3:** **1)** We agree with the reviewer that our method can be customized by a number of hyperparameters. Fortunately, as demonstrated in Table 7, we applied **the same hyperparameters across all models** and obtained consistently positive results. This indicates that the selected hyperparameters are robust and generalizable to different models and datasets. **2)** Regarding regularization, we opted for the relatively smaller regularization of 1e-5 since it produces a slightly better validation loss (1.83 vs. 1.86) during training. An overly large regularization may limit the search space of mask learning. > **Q4: For the LLaMA-2 models, the authors mention using the official dataset from the paper. However, the paper has no mention of any datasets used for training the models. Can the authors clarify this discrepancy?** **A4:** We do not have access to the official Llama-2 dataset as it is not publicly available.
As discussed in Line 431 of the appendix, we collected a blended dataset following similar principles as stated in the Llama paper, encompassing 69 domains such as Programming (C++, C Sharp, Python) and a general corpus. We will clarify this point in the revised manuscript. Additionally, we will provide the C4 results mentioned above, and make the code and our learned binary masks public to enhance the reproducibility of our experiments. > **Q5: In Table 11, when the authors mention the score from RIA, does that include the channel permutation method enabled in the paper?** **A5:** Thank you for the comment. The RIA result is collected from the official paper. Channel permutation is disabled for this result. If enabled, the PPL of RIA will be 7.77 compared to our 5.85. We will update Table 11 with this result. > **Q6: Can the authors verify their magnitude pruning results for downstream tasks for Llama2-7B (Table 1)? The downstream results seem too high.** **A6:** We double-checked these results and found them to be accurate. The same phenomenon has also been reported in the Wanda paper, where the accuracy drop on zero-shot tasks is 8%. > **Q7: There are some inconsistencies in the links in the paper (please fix this).** **A7:** Thank you for the comment. We will fix these links according to the instructions and carefully review other links to ensure their correctness. > **Q8: Throughout the paper, it is unclear which results map to which experiments. For example, for Figure 5, which models were used for the ablation? Same for the results in Tables 3, 5, and 13.** **A8:** Tables 3 and 5 utilize GPT-3 2B for quick experiments, while Table 13 covers all three models: GPT-3 2B, 843M, and Llama-2 7B. We will clarify the models in the captions following your advice. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response to the reviews and additional experimental results.
I understand that the 13B model results will take time; thank you for taking a stab at those. After reading all reviews and responses, I will update my rating to accept (score: 7). Please do incorporate the appropriate fixes / changes in the revised version of the paper. --- Reply to Comment 1.1.1: Comment: Thank you very much for your encouraging feedback! We will ensure that all appropriate fixes and changes are incorporated into the revised version.
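The magnitude prior discussed in A1 amounts to one-shot 2:4 magnitude pruning: keep the two largest-magnitude weights in every block of four. A minimal sketch of that baseline (hypothetical helper name, not the authors' code):

```python
import numpy as np

def magnitude_2to4_mask(weights):
    """Keep the two largest-magnitude weights in every block of four."""
    w = np.asarray(weights, dtype=float).reshape(-1, 4)
    # Indices of the top-2 magnitudes in each block of four.
    top2 = np.argsort(-np.abs(w), axis=1)[:, :2]
    mask = np.zeros_like(w)
    np.put_along_axis(mask, top2, 1.0, axis=1)
    return mask.reshape(np.shape(weights))

w = np.array([0.1, -0.9, 0.4, 0.05, 2.0, -0.1, 0.3, 1.5])
print(magnitude_2to4_mask(w))
# [0. 1. 1. 0. 1. 0. 0. 1.]
```

Such a mask is cheap to compute and, per the rebuttal, already serves as a usable prior because it largely overlaps with the masks found by SparseGPT.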
Summary: The authors propose a novel LLM pruning technique, modeling the distribution of all possible masks and formulating the selection of optimal masks in a differentiable way. Strengths: - Important and relevant problem setup. - The solution is novel. - Thorough evaluation across a range of model/dataset combinations. Weaknesses: - Unclear what the computational cost of this technique is, and how that compares with the alternative/more straightforward technique of sparse pretraining/finetuning. - Unclear why the authors do not report speedup for all models/datasets. - Using perplexity as a proxy for downstream coding task performance is sub-optimal. Using benchmarks such as HumanEval is preferred. Technical Quality: 3 Clarity: 4 Questions for Authors: - While this work is interesting and thoroughly executed already, can you comment on how this compares with the more straightforward baseline of sparse finetuning/pretraining? What's the compute cost vs. accuracy trade-off? - Finding 1 is vague in two ways: 1) unclear what "large scale dataset" means; 1B tokens is not large scale for LLMs, please just state the size of the dataset explicitly. 2) I don't think your experiments are enough to justify "Learnable Sparsity ... fully leverage computational resources to learn precise masks through end-to-end training". Just say your technique better leverages computational resources than prior art, which is what your experiments show. - What's the calibration set size used to produce Table 1? - For Table 2 SparseGPT, did you also use the weight updates in the Learned Mask setup? - As for weight magnitude regularization (Finding 4), have you tried tuning the learning rate for downstream finetuning? This sounds like a learning rate issue. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: all addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank reviewer AwSn for the valuable comments. > **Q1: Unclear computational cost. Comparison to sparse pretraining/finetuning.** **A1:** This submission discusses the computational cost of mask learning in Lines 433-435 of the appendix. According to the original tech report of Llama-2 [1], MaskLLM's training cost is 0.6% of pre-training (1,280 GPU hours / 184,320 GPU hours). On a single node with 8xA6000, MaskLLM (239 s/it) is slightly slower than the Straight-Through Estimator (218 s/it) due to its additional sampling process. For a quantitative comparison, please refer to our response to Question 4. [1] Llama 2: Open Foundation and Fine-Tuned Chat Models > **Q2: Speedup for all models/datasets.** **A2:** We supplement more benchmark results for Llama-2 13B and Nemotron-4 15B here:

|Model|Input Len.|Output Len.|Dense|2:4|Speed Up|
|:-|:-:|:-:|:-:|:-:|:-:|
|**Llama-2 13B**|128|128|32.97|51.64|1.57×|
||128|2048|31.81|49.00|1.54×|
||2048|128|31.14|47.19|1.55×|
||2048|2048|30.09|45.06|1.50×|
|**Nemotron-4 15B**|128|128|38.15|59.40|1.56×|
||128|2048|37.51|57.93|1.54×|
||2048|128|37.44|57.56|1.54×|
||2048|2048|36.85|56.37|1.53×|

*Table 1: Benchmarking Llama-2 13B and Nemotron-4 15B with A6000 and TensorRT-LLM*

The same 2:4 model will exhibit consistent speed-up across different datasets because the acceleration of the 2:4 pattern is determined by the inference engine (TRT-LLM in our case) and the hardware (A6000). > **Q3: PPL is sub-optimal for coding tasks. Using benchmarks such as HumanEval is preferred.** **A3:** Thank you so much for the insightful advice. We chose perplexity (PPL) in our paper as it is simple and general across different tasks and domains. We are still working on more specific evaluation metrics, such as HumanEval as suggested by the reviewer, and will provide additional results in the revised appendix. > **Q4: How does this compare with more straightforward baselines of sparse finetuning/pretraining?
What's the compute cost vs. accuracy trade-off?** **A4:** Following the suggestion, we provide more comparison results for different strategies: (1) mask-only learning with frozen weights, such as MaskLLM and Wanda, (2) sparse fine-tuning of the remaining weights after pruning, and (3) learning both weights and sparsity, such as STE.

|Method|Weight Update|LLM|Dense PPL↓|Sparse PPL↓|Δ PPL ↓|HellaSwag ↑|
|:-|:-:|:-:|:-:|:-:|:-:|:-:|
|Wanda + Sparse Finetuning [1]|✔️|Llama-1 7B|5.68|7.02|+1.33|n/a|
|Wanda + LoRA [1]|✔️|Llama-1 7B|5.68|8.24|+2.56|n/a|
|SR-STE [2]|✔️|Llama-2 7B|5.12|6.80|+1.68|51.26|
|SparseGPT + SPP [3]|✔️|Llama-2 7B|5.12|n/a|n/a|51.33|
|Wanda + SPP [3]|✔️|Llama-2 7B|5.12|n/a|n/a|50.61|
|**MaskLLM + Sparse Finetuning**|✔️|**Llama-2 7B**|**5.12**|**5.83**|**+0.71**|**54.66**|
|**MaskLLM**|-|**Llama-2 7B**|**5.12**|**6.72**|**+1.60**|**50.91**|

*Table 2: Comparison to more finetuning-based baselines.*

We re-implement SR-STE and collect other results from the related works. Some missing results are marked with "n/a". This table indicates that MaskLLM, even without any weight updates, can achieve competitive results (PPL=6.72) compared to SR-STE (PPL=6.80). This is expected, as MaskLLM can explore all mask candidates while SR-STE mainly focuses on weights with large magnitudes. Furthermore, incorporating sparse fine-tuning into MaskLLM can significantly improve both perplexity (PPL) and zero-shot accuracy on HellaSwag. **The cost-accuracy trade-off:** A key message in this work is that incorporating more compute can effectively enhance accuracy. One-shot methods are efficient yet not sufficiently accurate. Techniques like LoRA improve accuracy with more training, but they may not fully explore various mask combinations. In contrast, MaskLLM thoroughly examines different masks through the proposed differentiable sampling, incurring a 9% higher training cost than STE (see Q1), but achieving the highest accuracy among these methods.
Practically, one-shot methods are preferable if resources are limited. However, if sufficient resources and samples are available for end-to-end training, MaskLLM is a good choice for compressing LLMs. [1] A Simple and Effective Pruning Approach for Large Language Models. [2] Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch. [3] SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models. > **Q5: Finding 1 is vague. 1) Just state the size of the dataset explicitly. 2) Just say your technique better leverages computational resources than prior art.** **A5:** Following the advice, we will replace "large-scale dataset" with "larger calibration set" and specify its exact size (512k samples with 2.097B tokens) in the paper. Regarding point 2, we agree with the reviewer that the phrase "better leverages" is more appropriate and will revise the submission accordingly. > **Q6: What's the calibration set size used to produce Table 1?** **A6:** For the baseline methods, we used 128 samples for one-shot pruning following the official implementation. For MaskLLM, we utilized 512k samples. > **Q7: For Table 2 SparseGPT, did you also use the weight updates in the Learned Mask setup?** **A7:** When used as the prior, the weight update in SparseGPT is disabled. > **Q8: As for weight magnitude regularization (Finding 4), have you tried tuning the learning rate for downstream finetuning? This sounds like a learning rate issue.** **A8:** We indeed conducted tuning experiments and still observed this issue with different learning rates. Our hypothesis is that a low magnitude might not be a good initialization for downstream training. The validation PPL at 4,000 training steps is shown below, where the regularization effectively alleviates this problem.

|Lr|Val. PPL @ 4K Steps|
|-|:-:|
|1e-3|5.91|
|5e-4|5.85|
|5e-4 + Reg.|**5.62**|
|1e-4|5.84|

*Table 3: Validation PPL of downstream finetuning*

--- Rebuttal Comment 1.1: Title: Acknowledged Comment: Acknowledged and will stick to my accept rating. --- Reply to Comment 1.1.1: Comment: We would like to express our sincere gratitude for the insightful comments! We will improve the quality of our draft following the above suggestions.
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their invaluable comments and suggestions. We will make every effort to provide additional results to support our response within this limited rebuttal period. To ensure better reproducibility, we also promise to release our code and learned masks in the future.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Visual-TCAV: Explainability of Image Classification through Concept-based Saliency Maps
Reject
Summary: Saliency methods are among the most popular approaches to explaining an existing black-box image classifier. However, they are limited to localizing class objects in an image. In addition, since they rely on per-pixel importance, they are unable to generalize across multiple instances to provide a global explanation of the image classifier. To address these limitations, an existing work, Testing with Concept Activation Vectors (TCAV), provides global explanations via concept vectors learned from a set of example images with a known concept. However, TCAV can only provide global explanations, preventing it from indicating where a concept is located in the image. To solve this problem, the authors present Visual-TCAV, an approach that provides both local and global explanations. The authors realize this by learning a Pooled-CAV per concept based on the feature maps of a chosen layer in the network and combining this with the integrated gradients (IG) of the same feature maps for a given instance image. The resulting saliency map provides a localization of the concept in the instance. To achieve global explanations, they analyze the aggregation of the concept activations across images of a particular class. The authors provide analysis on layer selection, local explanations, and global explanations across several popular CNN-based models pretrained on ImageNet. In addition, they conduct a validation experiment to verify the effectiveness of their method where the ground-truth concept is known. Strengths: The method is able to add localization to the existing TCAV approach, increasing its ability to explain black-box classifier CNNs. The authors address that their approach only considers positive activations in the features and discuss the usefulness of accounting for negatively activated features in the future. The presentation of the paper is clean and easy to follow.
The methods are made simple to understand and are effective. Figure 1 was particularly effective at communicating their method. The experimentation and analysis were decently extensive. They analyze the effect of choosing shallow, middle, and deep layers for their approach, providing interesting findings on where certain concepts are activated. The validation experiment shows the faithfulness of their approach's ability to find the targeted concept in a set of example images. Qualitative results show strong localization ability of their method to identify queried concepts. Their approach is relatively fast to run for local and global explanations. Weaknesses: The authors show the activation of different types of concepts at different layers. It has been shown in [1] that certain levels of layers are associated with different types of concepts such as textures, shapes, objects, etc. This work should be referenced and discussed in comparison to their findings on activations at different layers. While the paper analyzes across common CNN models, it does not analyze ViTs or models trained on datasets other than ImageNet. The authors utilize generative models to create certain images containing a concept, but do not discuss why this was necessary. It is an interesting avenue, but I'm unsure of its necessity in this work if no further analysis was done on generated images in particular. [1] Chris Olah, Alexander Mordvintsev, and Ludwig Schubert. "Feature Visualization". In: Distill. 2017. Technical Quality: 3 Clarity: 4 Questions for Authors: Why did you use generated images for the "pews" and "fresco" concepts as opposed to gathering images with these given concepts manually? I understand that generating images may be more accessible, but you would still have to manually confirm that the generated images faithfully contain the targeted concepts.
Other than visual verification, is there a more streamlined way to ablate through the layers of a network in picking the most faithful one that identifies the concept? How were the shown concepts chosen in Figure 4? Were these the top 3 concepts for that class, or were they manually chosen? On lines 229-230, you state, "...the most accurate concepts maps are typically found in slightly earlier layers...". How are you determining 'accurate' concept maps here? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The paper doesn't demonstrate generalization of the method to vision transformers, which are also becoming popular in explainability and interpretability work. While this approach is effective at identifying concepts, it requires a manually selected set of example images with a predefined concept. While they mention that generative approaches could help reduce the number of required examples, choosing the correct concepts is still a limitation. This is particularly a problem in more specialized domains such as medical diagnosis, where concepts may not be known or are more difficult to explain / generate. In addition, one has to ablate through several layers for each concept to find which one effectively captures the concept. The paper only analyzes models pretrained on ImageNet. To establish the generalizability of this approach, analysis on a model trained on other datasets would be necessary. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
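The local-explanation step described in the summary, projecting a learned concept activation vector (CAV) onto a layer's feature maps, can be sketched as follows. This is a simplified illustration with invented names; it omits the integrated-gradients weighting and the pooling details of the actual Visual-TCAV method:

```python
import numpy as np

def concept_map(feature_maps, cav):
    """Project a concept activation vector onto spatial feature maps.

    feature_maps: (C, H, W) activations of one image at a chosen layer.
    cav:          (C,) vector pointing toward the concept in activation space.
    Returns an (H, W) map of positive concept evidence per spatial location.
    """
    cav = cav / np.linalg.norm(cav)
    # Per-location similarity between the channel vector and the CAV.
    m = np.tensordot(cav, feature_maps, axes=([0], [0]))
    # Keep only positive evidence, mirroring the paper's focus on
    # positively activated features.
    return np.maximum(m, 0.0)

fmaps = np.zeros((3, 2, 2))
fmaps[:, 0, 0] = [1.0, 2.0, 2.0]   # one location aligned with the concept
cav = np.array([1.0, 2.0, 2.0])
cm = concept_map(fmaps, cav)
print(cm)  # high response only at the concept-aligned location
```

In practice the resulting low-resolution map (e.g., 7x7 at the last layer) would be upsampled to the input image to obtain the saliency visualization.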
Rebuttal 1: Rebuttal: Thank you for your review and for acknowledging the merits of our work. Below, we comment on the identified weaknesses and questions. W1: We agree that the mentioned paper is highly relevant to our work and should be referenced. Considering our results in comparison to that paper, an interesting point is that while it is true that complex features are often not recognized at all early in the network (such as steeple and car in Appendix C), simple features such as the striped or honeycombed textures are indeed recognized by earlier layers. Furthermore, depending on how important they are for predicting a class, they get propagated all the way to the final layer. On the other hand, less class-discriminative concepts often disappear in the final layers. As mentioned in the future works section, it would be interesting to investigate whether they contribute to the recognition of other concepts. W2/L1: We agree that considering also the explainability of ViTs is indeed important, as these models are on par with CNNs in many image classification benchmarks. Furthermore, some saliency methods that were first tested on CNNs, such as Grad-CAM and IG, were successfully adapted to be applied to ViTs. Considering this, we have reasons to believe that concept-based explainability, including our method, can also be adapted to ViTs with some slight modifications. Given the relevance of the matter, we'll include this in the future work section. W3/Q1: We decided to include a few generated concepts to show that it could be a viable alternative to manual selection, possibly encouraging future works in this direction. Despite needing supervision, from a user-effort perspective it may be easier to just ask a generative model to produce 100 images and then discard a few artifacts rather than searching for 30-50 images online.
That said, we agree that it was not absolutely necessary to include it in this paper and further analysis should be done, especially on fine-tuning algorithms such as DreamBooth and on ways to automatically discard the artifacts. Q2/L3: This is a very intriguing question, as it is something we are currently working on, but it is more challenging than it seems. The natural approach would be to take the layer that was activated the most by the concept. However, this does not guarantee that in that layer the network is recognizing the concept intended by the user. For example, considering the "steeple" concept in Appendix C, layers are activated by the concept in somewhat different ways. Earlier layers seem to focus more on the edges (as the paper you mentioned would have predicted), middle layers focus on the entire roof including the steeple, while arguably the final one seems the most faithful for the "steeple" concept. However, the most active layer is not the most faithful in this case and in many other cases. To obtain the best layer, we would need to automatically determine that the highlighted area best describes the concept inserted by the user, which is a challenging task that could require the use of other black-box models that measure image similarity, which could make the explanations less transparent. Because of this, we decided to avoid the topic in this paper, but we acknowledge that while layer-wise explanations are more interesting for expert users, they may be less "friendly" for non-expert ones, and therefore we'll continue investigating in this direction. Q3: Concepts analyzed in Figure 4 are not the top ones; they were chosen manually before the experiment. Indeed, many of them do not have a high attribution for the class of interest. We'll state this more clearly in the paper. Q4: We acknowledge that the word "accurate" is not appropriate in this case.
A better word would be "fine-grained" or "detailed", as we are referring to the fact that, in earlier layers, the receptive fields are smaller and the feature maps are larger, so there is more information when visualizing the concept map on the input image, which is lost in deeper layers due to the pooling operations. For example, the size of the concept map for the last layer of the tested networks is 7x7, while an intermediate layer may be 28x28. A heatmap produced by the latter has more detail from a localization perspective than one produced by the former. L2: We agree that having to manually select 30 or more images can be considered a limitation, since it requires an effort from the users, and to address this we are currently working on automatic generation. However, we respectfully disagree that choosing the concept is a limitation. Users, domain experts especially, often have hypotheses about how the AI system is choosing the class, and manually selecting a concept to test allows them to get an answer to their questions. There are also cases in which domain knowledge is scarce and the automatic concept-extraction methods mentioned in the related work section could be more suitable, but this really depends on what the user wants to know about the network. Furthermore, in such cases our method could also be used to investigate more deeply the results of automatic approaches. L4: See answer to Question 1 (Q1) of Reviewer Z23K. --- Rebuttal 2: Title: Rebuttal Response Comment: Thank you for a detailed and well-crafted rebuttal. I believe you've addressed my questions, limitations, and weaknesses well. While I still think the need to manually select concepts is a limitation (and should be mentioned) and can be challenging, especially where diverse image data is lacking, I understand automatic concept generation / detection is an open area of research. In a line of work where concepts are available, this paper is a good contribution.
I've increased my score. --- Rebuttal Comment 2.1: Comment: Thank you for considering our rebuttal and adjusting the score. We acknowledge that manually defining concepts can sometimes be challenging, and we will include this point in the limitations section. Furthermore, we believe that having the possibility to select the concepts can be very valuable in scenarios where concepts are available, even if we manage to extend our method to also provide automatically extracted concepts in future works.
Summary: This work presents a method for combining TCAV with saliency maps to illustrate where feature-related concepts (e.g., "stripes" or "grass") are activated in an image. The evaluation is largely qualitative, but the method seems to work well on ImageNet classification tasks. The method is also validated on a controlled dataset with known ground-truth features, where it performs as expected. Strengths: + The method is straightforward and seems to work well. + The method is evaluated with a modified dataset where the ground-truth importance of concepts is known, or at least well controlled. I haven't seen this done in many papers in this field and it's a very nice addition to the work. + The paper is well-written and enjoyable to read. The investigation of the proposed method is quite thorough. Weaknesses: - The method is not particularly novel - there are other methods which localize CAVs in an image to provide local explanations. I think both of the automatic concept-extraction methods (ACE and ICE) described in Section 2 do this; the recent method CAVLI (https://ieeexplore.ieee.org/document/10208704) also does this. - This paper does not provide any comparisons to other methods. - Like other approaches based on TCAV, the method requires the user to identify the concepts of interest and curate training datasets to visually represent each concept. This limits the usefulness of non-automatic TCAV methods for general-purpose model explanations. - Like most other saliency map approaches, the explanations only focus on how concepts that are present in an image contribute to the decision. However, model decisions may also depend on the absence of features. This approach does not have a way to represent feature absence in the explanation, which can lead to confusing explanations in some cases, as shown in the validation experiment.
Technical Quality: 3 Clarity: 3 Questions for Authors: It would be interesting to see more results from tasks other than ImageNet classification, since these types of concept-based explanations seem very specific to ImageNet-like tasks (tasks where there is a specific object to classify which has easily-defined features like color, texture, shape, etc.). How would this method provide explanations for a task like facial recognition? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and for acknowledging the merits of our work including the importance of validating the explanations. We agree that it is a step often overlooked in this field. Below, we comment on the identified weaknesses and questions. W1: Regarding concept extraction methods, they rely on segmentation of the input image to generate image patches and then clustering to produce a concept. This can be particularly useful since it doesn’t require much domain knowledge and effort from the user, but there are also limitations (as discussed in the related work). Furthermore, it is a different approach from ours, as we aim to answer a user concern about a specific concept. Regarding CAVLI, we agree that it is highly relevant to our work and therefore should be cited in our related work section. This method, however, is very different from ours. It relies on segmentation of the input into superpixels, producing a new image from each segmented region and analyzing its correlation with the CAV (only for the penultimate layer of each network). Our method, on the other hand, can produce a saliency map showing where the network is “seeing” that concept directly with a simple equation (1) and provides global explanations in terms of the attribution of a concept, not just the sensitivity of the model to it. W2: In Section 4.3, we provide a comparison with the TCAV Score on the validation experiment. We would have no problem also adding a comparison with the CDS Score proposed by CAVLI. However, the unavailability of the code makes this challenging, as it would have to be re-implemented by us. Furthermore, the authors claim that the CDS Score resembles the TCAV Score, for which we already provide a comparison. On the other hand, a comparison with automatic concept-extraction methods would not make much sense as these methods are fundamentally different in their approach and answer different questions. 
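As a rough illustration for readers of how such a CAV-based concept saliency map could be computed: the sketch below uses a difference-of-means CAV and a channel-wise dot product with the feature maps. All shapes and names are hypothetical; this is a minimal approximation of the idea, not the authors' equation (1).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical activations at some conv layer: (n_images, k, h, w),
# where k indexes the feature maps.
pos_acts = rng.normal(loc=1.0, size=(30, 8, 7, 7))  # concept example images
neg_acts = rng.normal(loc=0.0, size=(30, 8, 7, 7))  # negative example images

# Difference-of-means CAV, spatially pooled: one component per feature map k.
cav = pos_acts.mean(axis=(0, 2, 3)) - neg_acts.mean(axis=(0, 2, 3))  # (k,)

def concept_map(feature_maps, cav):
    """Weight each feature map by its CAV component and sum over k,
    yielding an (h, w) saliency map for the concept."""
    return np.tensordot(cav, feature_maps, axes=([0], [0]))

test_acts = rng.normal(loc=0.5, size=(8, 7, 7))  # activations for a test image
m_raw = concept_map(test_acts, cav)
print(m_raw.shape)
```

In a real pipeline the raw map would then be rescaled and visualized over the input image; those details are omitted here.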
W3: We agree that defining concepts through example images certainly can be considered a weakness as it requires some effort from the user (although we showed that this can be done with just 30 images), who could also miss some relevant concepts. However, if we want an answer to the question “was this concept important in this prediction?”, this requires defining the concept in some way. We believe that allowing users to get an answer to their concerns is an important step towards enhancing transparency and trust in AI systems and we think that users should decide whether this is worth the extra effort based on their specific use case. Furthermore, we are currently working on how to exploit Generative AI to reduce this effort. W4: We acknowledge that our method is not able to provide explanations in terms of absence of concepts, which could be an interesting direction for future work, but it is a rather complex task. Since the CAV is a representation of the concept in the activations space, this would require a way of producing a similar representation for the absence of a concept. It may also be that networks learn the absence of concepts through negative weights. For example, considering our validation experiment, it may be that the network trained on 100% tagged images learned a strong bias towards predicting “zebra”, and associated negative weights with the “C” and “T” concepts towards “zebra”. In this way, the network always predicts “zebra” unless there is a “C” or a “T”. However, this is just a hypothesis and further investigation is required in this direction. As stated in the future work section, we are currently working on how to include these negative weights inside the Visual-TCAV framework, which may be able to provide an answer to this hypothesis. Regarding your statement that the “Z” tag explanation is confusing, we respectfully disagree. While the explanation may be counterintuitive, it is factual. 
Indeed, we asked Visual-TCAV whether the “Z” was important or not for predicting “zebra” and the method correctly answered that it wasn’t. Q1: We decided to use ImageNet as it contains a vast and highly diverse set of classes and is the foundational dataset for many neural networks used in real-world scenarios, but we agree that it would be a nice addition to include results from a different dataset. To this end, we trained a VGG16 model on the CelebA dataset [1] to classify between male and female and tested the “bearded” concept. We provide two examples of explanations (one with and one without the beard) in the attached pdf. For concept examples we used images of people with beards, with people without beards as negative examples. Considering Explainable Facial Recognition (XFR) specifically, we acknowledge that while one could test concepts such as bearded or a certain nose/eyes shape, high-performance models may rely on very complex features that are difficult to define as human-relatable concepts. For this reason, we will add results from the CelebA dataset and include in the limitations section that sometimes it may be difficult to define the concepts to be tested. However, this is a general limitation of concept-based explainability. In such cases, the outputs of automatic concept-extraction methods would also be quite difficult to interpret. We can’t say for certain, though, since, to our knowledge, none of these methods have been tested on these types of datasets. Regarding tasks other than classification, we are currently working on expanding Visual-TCAV to regression models. On this topic, our method can currently associate concepts with a higher output (e.g., bearded with a higher age). However, explaining regression requires the inclusion of negative weights to associate a concept also with a lower output; we are still working on this and leave it for future work. 
[1] Liu, Ziwei and Luo, Ping and Wang, Xiaogang and Tang, Xiaoou, Deep Learning Face Attributes in the Wild, ICCV 2015 --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I've increased my score, but I'm not sure how convinced I am about the weaknesses. The fact that the user needs to collect images for this method seems like a fairly major flaw that undercuts the "explainable" part of this method. The explanations are only interpretable in terms of the training images; if a concept is hard to visualise (or a user just doesn't collect a very representative set of images), the explanations won't make sense. But this paper does have some strong merits as well. --- Reply to Comment 1.1.1: Comment: Thank you for the quick response and for increasing your score. Regarding this last concern, we would like to add that while we agree that it would be very nice to get an explanation for specific concepts (which may be needed for various reasons) without having to define them through example images, whether it is actually possible is still an open research question and we are open to suggestions for future works in this direction. Furthermore, considering the possibility that users use example images that poorly represent a concept, with our method they have a saliency map as a visual feedback that indicates whether the learned CAV represents the intended concept and not a different one. The same cannot be said for TCAV, for which you may receive a high importance score because of another concept that happened to be in the background in many of the example images.
Summary: The paper introduces a novel technique, Visual TCAV, which unifies concept-based explainability with saliency maps. Visual TCAV produces local explanations in the form of saliency maps, which highlight the pixels in the image that represent a given user-defined concept. The visualization is enriched with an attribution score which represents the importance of a concept in the prediction of a given class. Visual TCAV can also produce global explanations by aggregating attribution scores of multiple images belonging to the same predicted class. Strengths: The paper is overall well written and experiments are discussed in detail. The technique addresses the limitations of TCAV and overcomes them by providing both a visual and quantitative analysis of a concept’s influence in a prediction. It also enriches the ability of TCAV of providing global explanations, allowing it to measure the influence of a concept and not only the sensitivity of a model to it. I find the visualization step to be particularly crucial for a fair analysis of convolutional neural networks and for bias detection. Weaknesses: Minor remarks: The use of the notation was inconsistent throughout the paper and some of the figures lacked clarity. In particular : • The index referring to the feature maps in the considered layer, k, is not explicitly mentioned in the text. It is understandable from Figure 1 that k refers to the index of such feature maps, but it should be written explicitly for better clarity, as it is frequently used in the formulas. • In line 145 the raw concept map is indicated with M_{raw}^c but in equation 1 the notation changes and becomes M^{c, raw} • In equation 3, the indices i,j are further introduced when denoting M_{ij}^c. I believe this refers to a pixel-wise notation, with i,j indexing the pixels in the rows and columns of the image, but it is not mentioned explicitly. The non-uniformity in the notation is somewhat disturbing. 
• The sentence in line 168-169 seems to imply that additivity holds, while later on in the paragraph it is specified that the measure is concept-wise. I would suggest a rephrasing. • The normalized logits presented in line 184-186 could be better expressed with a formula, to avoid misunderstandings on their derivation. Moreover, in line 188, are the normalized attributions the same as the normalised logits? If yes, use a consistent terminology. • The normalised attributions are indexed by an index t. It is defined only in line 193 that t represents the target class. It should be clarified before its first appearance. • In equation 4 the notation p_k^{c, norm} appears for the first time. Is it used to represent the pooled-CAV (p^c) rescaled to [0,1]? • There is no legend in Figure 2 to investigate the portrayed degrees of activations. In figure 2d the focus seems to be on the parachute rather than the sky. A legend would help interpret the figures more clearly. • I suggest the use of a colour-blind friendly palette for Figure 4 (for instance, figures 5 and 6 are better suited). • Figure 6 caption mentions a statistical significance test: how was the test conducted? Is it the same test described in the TCAV paper? Technical Quality: 3 Clarity: 3 Questions for Authors: 1) I am not sure I understand why it is necessary to multiply the integrated gradients with the feature maps to calculate the raw attributions. Could you elaborate on this choice? (Line 179-180) 2) In line 181 it is written that “the attributions add up to the logit value of the target class”. This seems like a wrong statement as the integrated gradients attributions sum up to the difference in output between the input image and the selected baseline. Did you check that this assumption holds? i.e., that the black image has almost zero output score? 3) What about misclassified images? Does Visual TCAV help in understanding any possible prediction error? 
Did you investigate this problem in your experiments? 4) In the description of TCAV you mention the stability of CAV to small perturbations. Did you investigate the stability of your procedure taking into account the possible instability derived from Integrated Gradients? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: I find that the main limitation of the method is mainly related to the use of concept-based explainability. It may be possible to craft concepts that incorporate social biases or that may be misleading. Clearly, this is a broader issue, not related directly to this paper, but for which there may be an interesting ground for discussion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed review and for acknowledging the merits of our work. Below, we comment on the identified weaknesses, questions and limitations. W1: Yes, k is the index of the feature maps of a given layer. We'll mention it explicitly for better clarity. W2: You are correct, it is a typo from the notation used in a previous draft. The notation in line 145 will become M^{c, raw}. W3: The concept map (M^c) is a two-dimensional matrix with the same shape as a feature map. In Equation 3, we introduce indices to underline that the operation is element-wise. We will explicitly write that each element of M^c is indexed by i,j, so that M_{ij}^c refers to the concept map M^c at location (i, j). W4: Due to the overlapping nature of concepts, there can't be additivity within them. For instance, it doesn't make sense to add the attribution of "wheel" to the attribution of "car" as one is part of the other. Since it was not meant to be implied, we'll rephrase this sentence. W5: The logits and the attributions are different entities. The logits are the raw predictions of the network (pre-softmax) while the attributions measure the importance of each element of the feature maps and are computed through Integrated Gradients (IG). The origin of the confusion may come from lines 167-168 in which it was not clear that "logits" refers to the "raw predictions" and not to "the attribution". This sentence will be rephrased. We'll also add a formula for the normalization procedure. W6: You are correct. Line 187 will become: "To estimate the attribution of a concept (c) for target class (t),.." W7: Yes, we'll explicitly write this in the paper. W8: Red regions mean high activation for the concept, we'll explicitly state this in the caption and add a color scale. W9: Thank you for your suggestion. This would be a nice improvement in terms of accessibility. We'll adopt a color-blind friendly palette for Figures 4, 5, 6 and 9. 
W10: Yes, we used the official TCAV code for the comparison, including the statistical testing. We'll mention it explicitly in the text. Q1: For image models, IG requires integrating the gradients from a baseline to the actual image and then multiplying the integral by the input image minus the baseline to obtain the pixel attributions. Our idea is to apply IG in the activations space (i.e., the concepts space) by considering an intermediate layer as the input and integrating the gradients from a set of zero-filled feature maps (our baseline) to the actual feature maps and then multiplying the result of this integral by the feature maps (our input) minus the baseline, which is zero. We hope this clarifies the procedure. It may be possible that referring to the result of the gradients' integral as "integrated gradients" and to the result of the multiplication as "attributions" could have created some confusion since the term “integrated gradients” is commonly used as a synonym for the "attributions", both referring to the result of the whole procedure including the multiplication with the input. To avoid ambiguities, we'll rephrase the description of this procedure. Q2: In our case, the baseline is a set of zero-filled matrices which, for all the tested networks, produce negligible logits (the sum of the attributions is usually within 1% of the logit value), but we agree that our statement is not exact and must be changed. Furthermore, since there could be cases in which the assumption does not hold, we’ll add to the paper that this should be checked, since if a class already has a high score with the zero-filled feature maps, that score is not influenced by concepts present in the images. Q3: Yes. In the ground truth experiment, we intentionally made some networks overfit, and we were able to explain the mispredictions by showing that the tags had a high importance. 
For instance, zebras erroneously classified as cucumbers could be explained by the importance of the "C tag" which led the network to these mistakes. Considering this question, we'll include more examples in which a certain concept contributes to a misprediction, since our method is independent of the correctness of the prediction (the cab image in Appendix C is classified as a jeep). In the attached pdf, we also provide a clear example in which an ox is classified as dalmatian with a high attribution for the “spotted” concept. Q4: Yes, we learn the CAV using the "difference of means" method, which was shown to be more robust than other methods such as SVMs [15]. The computation of the CAV and of the concept map is independent of IG, but we agree that IG could influence the stability of the Concept Attribution. However, the stability to small perturbations mainly depends on the number of images used as concept examples. Empirically we found that, with around 30-50 concept images, the output of the method changes very little when perturbing a few data points. Given the relevance of the matter, we'll add Appendix F replicating the stability experiment of Martin and Weller [15] but considering the Concept Attribution instead of the TCAV score. L1: We do not completely agree on this point, but we appreciate your openness to discussion. We agree that concept-based XAI has limitations, but alternatives like saliency methods are much more misleading and prone to cognitive bias because a highlighted region allows freedom of interpretation of what the network is "seeing" in that region. We aim at removing this layer of subjectivity by explaining which human-relatable concepts the network is recognizing. On the possibility of crafting biased concepts, while it's possible to do so, the explanation will reveal whether the network is biased as well. For example, in the validation experiment, we expected the network to rely on the "Z" tag, but our method disproved it. 
Furthermore, crafting socially biased concepts can even be useful for discovering fairness problems in the network's decision-making process. --- Rebuttal Comment 1.1: Comment: Thank you for your response to my questions and those of my fellow reviewers: I have found your attached pdf with additional examples particularly useful. In light of your clarifications, I have increased my score. --- Reply to Comment 1.1.1: Comment: Thank you for considering our rebuttal and for increasing your score. We are committed to improving the manuscript based on your feedback.
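For context on the activation-space Integrated Gradients procedure described in the Q1 answer above, here is a toy numpy sketch where a linear "head" with an analytic gradient stands in for the layers above the chosen feature maps (all names hypothetical, not the authors' implementation). With a zero-filled baseline, the attributions sum exactly to the logit, illustrating the completeness property discussed in Q2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "head": maps feature maps A of shape (k, h, w) to a class logit.
# A linear head has a constant gradient, so IG is exact here.
W = rng.normal(size=(4, 5, 5))

def logit(A):
    return float((W * A).sum())

def grad_logit(A):
    return W  # analytic gradient of the linear head w.r.t. A

def integrated_gradients(A, steps=64):
    """Integrate gradients along the straight path from zero-filled
    feature maps (the baseline) to A, then multiply by (A - baseline)."""
    baseline = np.zeros_like(A)
    alphas = (np.arange(steps) + 0.5) / steps  # midpoint rule
    integral = np.zeros_like(A)
    for a in alphas:
        integral += grad_logit(baseline + a * (A - baseline))
    integral /= steps
    return integral * (A - baseline)

A = rng.normal(size=(4, 5, 5))
attr = integrated_gradients(A)
# Completeness: attributions sum to logit(A) - logit(baseline).
print(abs(attr.sum() - (logit(A) - logit(np.zeros_like(A)))))
```

For a real network the gradient would come from automatic differentiation and the sum would match the logit only approximately, which is exactly the check discussed in Q2.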
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for the constructive feedback and useful suggestions. In addition to the detailed responses, this general rebuttal summarizes the identified strengths and the changes we’ll make considering the identified weaknesses and the suggested improvements. We apologize that due to space reasons we answered each weakness/question/limitation based on the order in which they were provided by the reviewers, without rewriting the reviewer comment in each rebuttal. Strengths: **Writing quality**: *“The paper is overall well written and experiments are discussed in detail”* (Reviewer FKau); *“The paper is well-written and enjoyable to read”* (Reviewer Z23K); *“The presentation of the paper is clean and easy to follow. The methods are made simple to understand and are effective. Figure 1 was particularly effective at communicating their method”* (Reviewer oLg1). **Concepts Visualizations**: *“The method is able to add localization to the existing TCAV approach increasing its ability to explain black-box classifier CNNs”* (Reviewer oLg1); *“I find the visualization step to be particularly crucial for a fair analysis of convolutional neural networks and for bias detection”* (Reviewer FKau). **Validation Experiment**: *“The method is evaluated with a modified dataset where the ground truth importance of concepts is known, or at least well controlled. I haven’t seen this done in many papers in this field and it’s a very nice addition to the work”* (Reviewer Z23K); *“The validation experiment shows the faithfulness of their approach's ability to find the targeted concept in a set of example images”* (Reviewer oLg1). **Experimental Results**: *“Qualitative results show strong localization ability of their method to identify queried concepts”* (Reviewer oLg1); *“The method is straightforward and seems to work well”* (Reviewer Z23K). 
**Performance**: *"Their approach is relatively fast to run for local and global explanations"* (Reviewer oLg1). In the revised manuscript, we will: - Improve the notation and figures accessibility based on the suggestions of Reviewer FKau. - Rephrase some ambiguous sentences based on the comments received by Reviewer FKau and oLg1. - Add more examples of misprediction based on a comment received by FKau (like the one provided in the attached pdf). - Add results for additional datasets such as CelebA (like the ones provided in the attached pdf), as suggested by Reviewer oLg1 and Z23K. - Add an Appendix chapter with an experiment to assess the stability of the method to small perturbations based on a comment received by FKau. - Update the “Limitations and Future Work” section based on the comments from Reviewer oLg1 and Z23K. - Reference and discuss the related works mentioned by Reviewer oLg1 and Z23K. Pdf: /pdf/32c8e2167d7a45ba3de04747e3cc7351a4c67289.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models
Accept (poster)
Summary: This paper proposes a method to integrate visual prompts into MLLMs without requiring additional training. The key idea is to optimize a learnable latent variable to enhance the attention response to visual tokens during inference, thereby improving the model's ability to focus on specific regions in the visual input. Strengths: The proposed approach does not require additional training for unseen datasets. Weaknesses: 1. Very poor writing. The use of abbreviations and full terms is very inconsistent. For example, "Multimodal Large Language Models" is spelled out in full on Line 29-30, while the abbreviation "MLLMs" is used frequently earlier in the text. Additionally, "Linguistic" in Line 34 should be "textual." There are many grammatical errors in the figure's prompt: "What's color of hat the person wearing?" and output: "The person wearing the hat is wearing a green hat.". Even the title in section 4.2 contains grammatical errors. 2. This paper lacks novelty and is an incremental form of previously proposed methods with no innovative points. 3. In Line 3, the authors claim that attention connects visual tokens and textual tokens, but in Line 33, it changes to MLP. 4. The experimental results are insufficient and lack numerous baselines, such as LLaVA1.5[1], LLaVA-NeXT[2], Monkey[3], and Qwen-VL[4]. 5. The motivation for this study is insufficient. I think there is a baseline: making the prompt descriptions clearer and more comprehensive based on the original MLLM. This baseline can also leverage MLLM's inherent ability to focus on specific regions. [1] Improved Baselines with Visual Instruction Tuning. [2] LLaVA-NeXT: Improved reasoning, OCR, and world knowledge. [3] Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models. [4] Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond Technical Quality: 2 Clarity: 1 Questions for Authors: See weakness. 
Confidence: 5 Soundness: 2 Presentation: 1 Contribution: 1 Limitations: See weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Rebuttal >#### **Q1: The use of abbreviations and full terms is very inconsistent. For example, "Multimodal Large Language Models" is spelled out in full on Line 29-30, while the abbreviation "MLLMs" is used frequently earlier in the text. Additionally, "Linguistic" in Line 34 should be "textual." There are many grammatical errors in the figure's prompt: "What's color of hat the person wearing?" and output: "The person wearing the hat is wearing a green hat.". Even the title in section 4.2 contains grammatical errors.** We have provided the full term "Multimodal Large Language Models" and its abbreviation "MLLMs" in earlier text, such as lines 2 and 19. Thank you for pointing out the need for consistent terminology and other writing issues. We will correct these in the final version, including using "textual" instead of "linguistic" and addressing the grammatical errors. >#### **Q2: This paper lacks novelty and is an incremental form of previously proposed methods with no innovative points.** Thank you for your comments. We appreciate the opportunity to reiterate the innovations of our paper. Firstly, our motivation is novel. This paper introduces a setting for embedding visual prompts without additional training, enabling the integration of visual prompts into existing models conveniently. On the technical side, we leverage the idea of visual prompts and introduce a test-time prompt tuning strategy to adjust the attention distribution of MLLMs and facilitate the injection of visual prompts. This technique also offers valuable insights for improving the interpretability of MLLMs. Our contributions have been recognized by Reviewer VBnv as well. We will rephrase the contributions in the revised version, and your suggestions will help make our paper clearer. Thank you for your valuable feedback. 
>#### **Q3: In Line 3, the authors claim that attention connects visual tokens and textual tokens, but in Line 33, it changes to MLP.** In lines 5 and 34, we explain that specific MLP layers can influence the attention responses between visual and textual tokens. We provide a detailed analysis of this in Section 4.1, demonstrating how MLP outputs can control the interaction between these tokens. >#### **Q4: The experimental results are insufficient and lack numerous baselines, such as LLaVA1.5, LLaVA-NeXT, Monkey, and Qwen-VL.** Thank you for your constructive feedback. Our primary baseline is LLaVA1.5, as indicated in the "Experiment Details" section. We also present experiments on InstructBLIP in Table 5. Following your suggestion, we provide additional results in the following table. Qwen-VL is a trained referring MLLM, and LLaVA-Next is a recently released project supporting high-resolution input, both of which are classic and influential works. However, due to time and cost constraints, we conducted experiments on LLaVA-HR [1] as an alternative to LLaVA-Next. In the final version, we will include all these methods, including LLaVA-Next, for discussion and comparison.

| Model | Task | Vanilla | Ours |
|---------------|------|---------|-------|
| Training method | | | |
| **Qwen-VL** | ROC | 72.6 | - |
| | RTC | 64.7 | - |
| Training-free method | | | |
| **LLaVA1.5** | ROC | 54.72 | 60.59 |
| | RTC | 53.57 | 61.22 |
| **InstructBLIP** | ROC | 49.81 | 54.91 |
| | RTC | 26.46 | 28.94 |
| **LLaVA-HR** | ROC | 53.81 | 58.92 |
| | RTC | 47.01 | 58.60 |
| **Monkey** | ROC | 55.26 | 60.68 |
| | RTC | 55.59 | 63.39 |

[1] Feast Your Eyes: Mixture-of-Resolution Adaptation for Multimodal Large Language Models. >#### **Q5: The motivation for this study is insufficient. I think there is a baseline: making the prompt descriptions clearer and more comprehensive based on the original MLLM. 
This baseline can also leverage MLLM's inherent ability to focus on specific regions.** Our primary motivation is to introduce referring capabilities to MLLMs without significant training costs, rather than just proposing the task of injecting referring capabilities. Regarding the baseline suggested, it primarily serves to demonstrate the necessity of visual prompts. As discussed in Section 2.2, extensive prior work on training referring MLLMs has validated the importance of visual prompts. They simplify the referring process for users, reducing the need for precise text prompts and improving interaction efficiency. Moreover, clearly describing specific regions through text prompts can be challenging and unfriendly for users. Constructing such a baseline requires substantial effort, which is impractical. Thank you for your feedback. Indeed, the value of "referring MLLMs" has been explored and studied by many researchers [2-7], and this is not the primary contribution of our paper. The significance of this task lies in addressing scenarios that are challenging for language instructions alone, serving as a valuable complement to them. Constructing the baseline you mentioned is currently quite challenging and a key area for future research, which is one of the significant aspects of the referring MLLMs task. Our core contribution is in embedding visual prompt information into MLLMs without additional training, which is highly innovative for the field. This approach enables any MLLM to gain referential capabilities plug-and-play while maintaining generalizability. We hope this clarifies our contribution and addresses your concerns. [2] Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond. 
[3] Ferret: Refer and ground anything anywhere at any granularity [4] Draw-and-understand: Leveraging visual prompts to enable mllms to comprehend what you want [5] Kosmos-2: Grounding multimodal large language models to the world [6] Shikra: Unleashing multimodal llm’s referential dialogue magic [7] Ferret-ui: Grounded mobile ui understanding with multimodal llms --- Rebuttal Comment 1.1: Comment: Hello Reviewer, The author has submitted a response to your comments. Whether or not it addresses your concerns, it would be greatly appreciated if you could acknowledge that you have reviewed the reply. --- Rebuttal Comment 1.2: Comment: Thanks to the authors' response. Although some of my concerns have been addressed, I believe there is still significant room for improvement and enhancement in this work. Therefore, I will maintain my score.
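To make the core idea under discussion concrete for readers (test-time optimization of a learnable latent variable via an energy function so that attention concentrates on the referred region), here is a rough toy sketch in numpy using a finite-difference gradient. All names, shapes, and the energy definition are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

K = rng.normal(size=(16, 8))            # keys of 16 visual tokens
mask = np.zeros(16)
mask[:4] = 1.0                          # tokens inside the referred region

def attention(latent):
    scores = K @ latent                 # query depends on the learnable latent
    e = np.exp(scores - scores.max())
    return e / e.sum()

def energy(latent):
    # Low energy <=> attention mass concentrated inside the region mask.
    return -float((mask * attention(latent)).sum())

def optimize(latent, lr=0.2, steps=200, eps=1e-4):
    for _ in range(steps):
        g = np.zeros_like(latent)       # finite-difference gradient
        for i in range(latent.size):
            d = np.zeros_like(latent)
            d[i] = eps
            g[i] = (energy(latent + d) - energy(latent - d)) / (2 * eps)
        latent = latent - lr * g        # descend the energy
    return latent

z0 = rng.normal(size=8)
z = optimize(z0.copy())
print(energy(z0), energy(z))            # energy drops as attention shifts into the region
```

In the actual method the gradient would come from backpropagation through the MLLM's attention layers rather than finite differences; the sketch only illustrates the energy-descent loop.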
Summary: The paper introduces a training-free approach to improve the referring capabilities of multimodal large language models (MLLM). In particular, the authors iteratively adjust the attention maps using a learnable latent variable, which is based on energy functions. They empirically validate the efficacy of their method on referring classification tasks, leveraging the foundation of LLaVA. Strengths: + The paper is generally easy to read and follow. + The figures are clear, especially the visualization of attention maps under different conditions. + The proposed method is training-free and theoretically plug-and-play with different foundation models. Weaknesses: 1. The generalization ability of the proposed method has not been fully verified. (a) It's uncertain whether the method can be applied to MLLMs beyond LLaVA. As the foundational model strengthens, the method's effectiveness could potentially diminish. (b) The energy function, based on a soft mask, heavily relies on the quality of segmentation models. (c) As seen in Table 3, the results appear to be sensitive to hyperparameters. This raises the question: do we need to meticulously adjust the parameters for each model and sample? If that's the case, the practicality of this method could be questionable. 2. There is a naive baseline that requires discussion: directly cropping the referred region and feeding it into the LLM. 3. In Tab. 2&3, the tasks and compared models are indeed limited. It is recommended to discuss more state-of-the-art MLLMs on wider tasks and datasets. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the Weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have discussed it in Sec. 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Rebuttal >#### **Q1: The generalization ability of the proposed method has not been fully verified. (a) It's uncertain whether the method can be applied to MLLMs beyond LLaVA. As the foundational model strengthens, the method's effectiveness could potentially diminish. (b) The energy function, based on a soft mask, heavily relies on the quality of segmentation models. (c) As seen in Table 3, the results appear to be sensitive to hyperparameters. This raises the question: do we need to meticulously adjust the parameters for each model and sample? If that's the case, the practicality of this method could be questionable.** (a) We have included more results, such as the InstructBLIP model and the Referring Description task, in Tables 4 and 5 to demonstrate the generalization ability of our method. Our approach fundamentally enhances MLLMs, which primarily possess classification capabilities, by endowing them with referring abilities. Specifically, our method provides MLLMs with localization capabilities rather than relying solely on classification. Since classification relies on the foundational model, our method is orthogonal to the foundational model. As seen in Table 5, the less significant improvement on LLaVA-13b compared to LLaVA-7b is due to the increased complexity of the LLaVA-13b decoder, which makes optimization more challenging. Thus, the effectiveness of our method is related to the difficulty of optimizing the model, not necessarily diminished by the enhancement of the foundational model. Following your suggestions, we have included several methods beyond LLaVA, as detailed in Q3. We will create a dedicated section to discuss this area, which will make our paper's contributions even more robust. (b) Our method only requires an additional visual prompt during inference and does not depend on segmentation models, although they could be an optional technical route. 
In our approach, the soft mask is calculated using OpenCV functions, ensuring it is not heavily reliant on segmentation models' quality. (c) As demonstrated in Table 5, we conducted experiments on the InstructBLIP model using the same hyperparameters as LLaVA, achieving superior performance. Although fine-tuning the hyperparameters could lead to even better results, we plan to further improve our optimization strategy to make it more adaptable to different models. >#### **Q2: There is a naive baseline that requires discussion: directly cropping the referred region and feeding it into the LLM.** Thanks for your constructive suggestion. Following your suggestion, we have provided relevant results for reference in the following table. Given that LLaVA's image preprocessing resizes images to 224x224, cropping enlarges small objects in the input, so the classification performance on small objects might be better, potentially resulting in superior performance compared to LLaVA + Blur. We will add the above discussion to the new version. | Model | ROC | RTC | |-----------------|-------|-------| | **LLaVA** | 54.72 | 53.57 | | **LLaVA + Blur**| 73.39 | 83.60 | | **LLaVA + Crop**| 82.04 | 88.78 | | **LLaVA + Ours**| 60.59 | 61.22 | >#### **Q3: In Tab. 2&3, the tasks and compared models are indeed limited. It is recommended to discuss more state-of-the-art MLLMs on wider tasks and datasets.** Thanks for your great comment. We have included results for the RD task and the InstructBLIP model in Tables 4 and 5. It is also noteworthy that in the RD task, the performance differences among various foundational models are significantly influenced by the task and evaluation strategies. In contrast, in binary classification tasks like ROC and RTC, the results from different foundational models are relatively robust. This is why we followed Ferret in primarily experimenting on ROC and RTC tasks. We also provide more results in the following table.
| Model | Task | Vanilla | Ours | |---------------|------|---------|-------| | **LLaVA1.5** | ROC | 54.72 | 60.59 | | | RTC | 53.57 | 61.22 | | **InstructBLIP** | ROC | 49.81 | 54.91 | | | RTC | 26.46 | 28.94 | | **LLaVA-HR** | ROC | 53.81 | 58.92 | | | RTC | 47.01 | 58.60 | | **Monkey** | ROC | 55.26 | 60.68 | | | RTC | 55.59 | 63.39 | --- Rebuttal Comment 1.1: Comment: Hello Reviewer, The author has submitted a response to your comments. Whether or not it addresses your concerns, it would be greatly appreciated if you could acknowledge that you have reviewed the reply. --- Rebuttal 2: Comment: Dear Reviewer, Thank you for taking the time to review our paper and offer your valuable suggestions. We have thoroughly addressed your concerns in the rebuttal. We kindly ask if you would consider raising the score for our paper.
Summary: This paper introduces a training-free approach to integrate visual prompts into MLLMs using learnable latent variables, aiming to enhance the model's interpretability and generalization. It adjusts visual tokens from MLP outputs and optimizes latent variables with an energy function to improve attention on relevant visual regions. Strengths: 1. This paper demonstrates and visualizes how the attention between prompt tokens and visual tokens differs across various layers. 2. Figures 2 and 4 are helpful to understand the method. 3. The quantitative and visualization experiments validate the effectiveness of the proposed approach. Weaknesses: 1. Figures 3(a) and 3(b) show visualization results for different values of η. The paper mentions that in 3(a), η is too small to effectively control the attention. However, the focus of the attention map in 3(b) does not significantly differ from that in 3(a). 2. Table 3 shows that the highest accuracy is achieved when α = 400 and T = 3. Why, then, is the value of T ultimately chosen as 4? 3. Since there are no recent developments, contributions, or updates to the MLLM, its detailed presentation in Eqs. (1)-(3) might be unnecessary and could be omitted. Besides, some symbols are not defined, such as $I_i$ and $A_i^{(ct)}$. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. It is recommended to validate the effectiveness of the method on additional MLLMs. 2. Additionally, please note that the title listed on the paper submission does not match the title in the PDF. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The limitations have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Rebuttal >#### **Q1: Figures 3(a) and 3(b) show visualization results for different values of $\eta$. The paper mentions that in 3(a), $\eta$ is too small to effectively control the attention. However, the focus of attention map in 3(b) does not significantly differ from that in 3(a).** Thank you for your question. To clarify, there is a noticeable difference in the response of the green hat region between Figures 3(a) and 3(b). This indicates that in certain layers, the attention in the green hat region has a higher response, and these layers may play a crucial role in determining the model's output. It is important to note that not all layers' visual tokens determine the model's output[1]. Some layers might purely serve the purpose of organizing language, which explains why we cannot achieve our goal by editing the attention in all layers directly. However, our approach can implicitly guide the model to cause changes in the attention responses in certain layers without excessively affecting the model's ability to organize language. We will clarify this point in the final version. >#### **Q2: Table 3 shows that the highest accuracy is achieved when $\alpha$ = 400 and T = 3. Why, then, is the value of T ultimately chosen as 4?** As outlined in the 'Impact of EMA and ES' section, we combined EMA (Exponential Moving Average) and ES (Early Stopping) strategies to enhance the stability and convergence speed of model optimization. We observed in Table 6 that a larger $T$ value, when combined with ES, led to better results. Therefore, we chose a slightly larger $T = 4$ to ensure adequate optimization for more challenging samples, even if it involved a slight trade-off in peak accuracy. This decision was based on our observation that a $T$ value of 4 and a relevancy score around 0.18 helped maintain consistency and robustness across various validation scenarios. 
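The EMA and ES strategies discussed in the answer above can be illustrated with a minimal, generic sketch of gradient-based latent optimization. This is an illustration only, not the paper's implementation: `optimize_latent`, `energy`, and `grad` are hypothetical names, and the toy quadratic energy stands in for the paper's attention-based relevancy objective.

```python
def optimize_latent(z, energy, grad, lr=0.25, T=4, ema_decay=0.5, es_threshold=1e-6):
    """Generic sketch: up to T gradient-descent steps on a latent vector z,
    keeping an exponential moving average (EMA) of the iterates for stability
    and stopping early (ES) once the energy falls below a threshold."""
    z_ema = list(z)
    for _ in range(T):
        if energy(z) < es_threshold:                     # ES: stop once converged
            break
        g = grad(z)
        z = [zi - lr * gi for zi, gi in zip(z, g)]       # gradient-descent update
        z_ema = [ema_decay * e + (1 - ema_decay) * zi    # EMA smoothing of iterates
                 for e, zi in zip(z_ema, z)]
    return z_ema

# Toy usage with a quadratic energy E(z) = sum(z_i^2), whose gradient is 2z:
result = optimize_latent([2.0],
                         energy=lambda z: sum(zi * zi for zi in z),
                         grad=lambda z: [2.0 * zi for zi in z])
print(result)  # → [0.375]
```

The step size, decay, and threshold here are arbitrary toy values; the settings reported in the rebuttal (e.g. T = 4 and a relevancy score around 0.18) would replace them in practice.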
>#### **Q3: Since there are no recent developments, contributions, or updates to the MLLM, its detailed presentation in Eq.s (1)-(3) might be unnecessary and could be omitted. Besides, some symbols are not defined, such as, $I_i$ and $A_i^{(ct)}$.** We understand the importance of clarity and will refine the manuscript to ensure that all terms are well-defined. Additionally, we will reassess the inclusion of these equations to ensure that the presentation remains concise and focused on novel contributions. >#### **Q4: It is recommended to validate the effectiveness of the method on additional MLLMs.** As demonstrated in Table 5, we have already validated our method on several additional models, such as InstructBLIP. These experiments provide preliminary evidence of the method's generalizability. We also provide more results in the following table. | Model | Task | Vanilla | Ours | |---------------|------|---------|-------| | **LLaVA1.5** | ROC | 54.72 | 60.59 | | | RTC | 53.57 | 61.22 | | **InstructBLIP** | ROC | 49.81 | 54.91 | | | RTC | 26.46 | 28.94 | | **LLaVA-HR** | ROC | 53.81 | 58.92 | | | RTC | 47.01 | 58.60 | | **Monkey** | ROC | 55.26 | 60.68 | | | RTC | 55.59 | 63.39 | >#### **Q5: Additionally, please note that the title listed on the paper submission does not match the title in the PDF.** Thank you for pointing this out. The discrepancy arose due to a last-minute change in the paper title after the abstract submission deadline. We will ensure that the final version of the paper has a consistent title across all submission materials to avoid confusion. [1] An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models. --- Rebuttal Comment 1.1: Comment: Hello Reviewer, The author has submitted a response to your comments. Whether or not it addresses your concerns, it would be greatly appreciated if you could acknowledge that you have reviewed the reply. 
--- Rebuttal 2: Comment: Dear Reviewer, Thank you for taking the time out of your busy schedule to review our paper and provide valuable feedback. We have addressed your concerns in the rebuttal. We kindly ask if you would consider raising the score for our paper.
Summary: This paper proposes a training-free ControlMLLM, which uses optimizable latent variables to inject visual prompts into multimodal large language models (MLLMs). The core idea is to adjust the visual token outputs of the MLP during inference to control the attention response and ensure that the text prompt tokens focus on the indicated visual regions. It enhances the intensity of the indicated regions in the attention maps by optimizing learnable latent variables based on an energy function, enabling reference to various visual prompts (including boxes, masks, scribbles, and points) without model training, fine-tuning, or additional data. The method demonstrates out-of-domain generalization and interpretability, providing a promising direction for integrating referential capabilities into MLLMs. Experiments show that the proposed model is effective. Strengths: 1), Prompt tuning for MLLMs is an interesting direction, and the proposed ideas are both effective and simple. Motivated by text-to-image works, ControlMLLM aims to control the attention map between the textual tokens and visual patches. This idea makes sense and provides a new direction to improve MLLMs. 2), The comparisons and ablations show the effectiveness of the proposed model. 3), The writing is clear, and the figures and tables help to understand the motivations. Weaknesses: 1), The visual prompt shows great improvements over the base model LLaVA; however, it requires additional guidance information and more inference time, which may limit the applications of ControlMLLM, especially in some complex scenarios where the guidance signals are unavailable. 2), In addition, the region signal $r$ plays a core role during optimization, and it controls the output of MLLMs. What if $r$ itself is wrong? It may mislead the MLLMs.
Technical Quality: 3 Clarity: 4 Questions for Authors: See above Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # **Rebuttal** >#### **Q1: The visual prompt shows great improvements over the base model LLaVA, however, it requires additional guidance information and more inference time, which may limit the applications of ControlMLLM. Especially in some complex scenarios where the guidance signals are unavailable.** Thank you for your comment. Indeed, all settings that apply MLLMs to visual grounding, such as Referring MLLMs, require visual prompts [1, 2, 3, 4, 5, 6]; this is not unique to our approach. However, our setup allows visual prompts to be injected into MLLMs without additional training, offering greater flexibility. Additionally, our method does not require fine-tuning the large model, which provides stronger generalizability. These features make our model better suited for transfer to other complex scenarios. As shown in Table 2, our method can easily transfer referential capabilities to out-of-domain data. Regarding the time issue, the comparison is 5.78s vs 7.45s when the model outputs about 400 tokens, as shown in Table 7. Our model is indeed slower, but the difference is acceptable. We will further optimize this aspect in future work. >#### **Q2: In addition, the region signal $r$ plays a core role during optimization, and it controls the output of MLLMs. What if $r$ itself is wrong? It may mislead the MLLMs.** Thank you for your insightful comments and valuable suggestions. Your feedback has greatly guided our research direction. Visual prompts and text prompts, which are user-provided instructions during inference, play a crucial role in Human-Computer Interaction. While users should accurately specify the desired region, we acknowledge the risk of incorrect input potentially misleading the MLLMs. To address this, we propose incorporating validation mechanisms or fallback options in future versions to ensure robustness even when faced with inaccurate prompts.
This approach aims to improve the reliability of the method in real-world applications. [1] Ferret: Refer and ground anything anywhere at any granularity [2] Draw-and-understand: Leveraging visual prompts to enable mllms to comprehend what you want [3] Kosmos-2: Grounding multimodal large language models to the world [4] Shikra: Unleashing multimodal llm’s referential dialogue magic [5] Ferret-ui: Grounded mobile ui understanding with multimodal llms [6] Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their response, and I read the other comments as well. This paper makes an interesting idea of training-free prompt tuning for MLLM. As a result, I have decided to raise my rating. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for carefully reading our paper and rebuttal, and for recognizing our work. We wish you the best of luck.
NeurIPS_2024_submissions_huggingface
2024
Perception of Knowledge Boundary for Large Language Models through Semi-open-ended Question Answering
Accept (poster)
Summary: This paper investigates the knowledge boundaries of Large Language Models (LLMs) using semi-open-ended questions, revealing limitations in LLMs' understanding. The authors introduce a novel method involving an auxiliary model to discover low-probability ambiguous answers, constructing a dataset of 953 questions across 32 domains. Their findings show that GPT-4 frequently produces unqualified answers and struggles with accurate self-evaluation, highlighting the need for improved detection of LLM knowledge boundaries. Strengths: 1. This work introduces a novel approach to evaluate LLMs using semi-open-ended questions, revealing limitations in existing methods focused on close-ended questions. The proposed method of identifying low-probability ambiguous answers is innovative and well-motivated. 2. Experiments show significant improvements in understanding LLM knowledge boundaries, particularly where models like GPT-4 may hallucinate, enhancing evaluation reliability. 3. The paper is well-written and easy to follow. Weaknesses: 1. The evaluation scope of this paper is limited, as it only assesses the performance of GPT-4 Turbo, which may be insufficient to represent the overall performance of LLMs. Have the authors analyzed other mainstream large language models such as Claude or LLaMA? It would be better to provide the performance of other large language models on this dataset. 2. The practical implications of the findings are not fully demonstrated. It is recommended to provide specific examples of applications to illustrate how these findings can improve the reliability of LLMs in practical use. Further explanation is needed on how improving training data or algorithms can mitigate these issues and enhance the practical utility of the models. 3. The paper mentions that GPT-4 performs poorly on semi-open-ended questions, but it may not have thoroughly analyzed the different types of errors (such as factual errors, logical errors, etc.) and their causes. 
Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Has the author analyzed the performance of other large language models except GPT-4 on semi-open datasets? 2. Can you provide a more detailed analysis of the types of errors made by GPT-4 on semi-open-ended questions? 3. How do you ensure the reliability and consistency of human annotations in verifying ambiguous answers? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable questions! We have incorporated all the suggested experiments and the results meet expectations. We are confident that we have addressed all of your concerns as outlined below. Based on our new experimental findings and explanations, we would appreciate it if you would reconsider the final evaluation of our work. ## Q1: It is better to evaluate more LLMs. Following your advice, we analyze the performance of the Claude model (claude-3-sonnet-20240229) on semi-open-ended questions and find it also performs poorly and generates many unsatisfying answers. | | Unqualified Answer % (↓) | Inaccurate Evaluation % (↓) | Hallucinated Question % (↓) | | ------ | --------------------- | -------------------------- | -------------------------- | | GPT-4 | 40.15 | 28.47 | 82.90 | | Claude | 37.15 | 33.73 | 88.00 | The above table shows the evaluation results of two powerful LLMs. Results show that similar to GPT-4, Claude also performs poorly on the semi-open-ended questions. Due to the time limit, we randomly sample 100 questions from our dataset and evaluate the performance of Claude. As the most advanced LLMs perform poorly on this task, we need human annotation to fact-check each generated answer, which is time-consuming. Specifically, it takes 2 minutes for an annotator to assess and double-check the truthfulness of each answer. Our dataset contains 1k questions, each of which corresponds to 13 tail-answers on average for verification. Even if we hire more than 5 annotators, it takes us 72 hours (more than 7 days if they work 10 hours per day). ## Q2: Provide more application examples and practical implications of the findings. Perceiving LLMs' knowledge boundaries is important to understand and alleviate hallucination [1,2]. Ambiguous answers for semi-open-ended questions are highly likely to be beyond the knowledge boundaries of LLMs (see Sec 4.4).
Discovering ambiguous answers benefits many applications, including: 1. It helps detect the knowledge scope of LLMs more faithfully. Many close-ended hallucination evaluation benchmarks face the danger of data contamination [3, 4]. Semi-open-ended questions are easy to design and correspond to a large number of undocumented answers; 2. Flagging ambiguous answers with higher uncertainty enhances the LLM outputs [5, 6]; 3. Identifying ambiguous answers helps achieve selective retrieval that augments LLM with indispensable external information while reducing the distraction of irrelevant data [7, 8, 9]. 4. It helps align LLMs for more honest generation by teaching the LLM to admit its knowledge limits on knowledge it is unfamiliar with (ambiguous answers) [10, 11, 12]. We also find that in real life, semi-open-ended questions are quite common, indicating that the potential impact of our work is quite large. To estimate the proportion of semi-open-ended questions, we randomly sample 1k questions from an open-source corpus and conduct statistics to find that 33.6% of questions are semi-open-ended. ## Q3: Detailed analysis of types of errors Following your suggestion, we categorize different types of errors made by our target LLM and analyze their causes according to evaluation results. | Error Types | Factual Inconsistency | Factual Fabrication | | ----------- | --------------------- | ------------------- | | Ratio | 91.45% | 8.55% | Following [1], we distinguish different types of hallucinations: 1. Factual inconsistency takes up 91.45% of errors for semi-open-ended questions. It happens when the answer can be grounded in real-world information, but mismatches certain requirements in the question. 2. Factual fabrication accounts for 8.55% of errors, which occurs when the answer is unverifiable from public sources. Besides, we found that 86.15% of evaluated answers met some parts of the requirements in the question while failing to satisfy the remaining requirements.
This may be because some conditions in the question overshadow others, leading to unqualified answers [13]. Our focus is the detection of LLM's knowledge limitations. Logical errors are usually observed in reasoning tasks, which is not the primary focus of this work. We will study the logical errors for QA tasks in the future. ## Q4: How to ensure the reliability and consistency of human annotations? We hired 11 annotators with Master's degrees, and provided clear evaluation guidelines (see Line 563) and feedback to human annotators during the evaluation process, thereby ensuring the reliability of the annotations across the entire dataset. Finally, we cross-check the annotation results. ## Reference [1] A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions [2] Knowing What LLMs DO NOT Know: A Simple Yet Effective Self-Detection Method [3] An Open Source Data Contamination Report for Large Language Models [4] Investigating Data Contamination in Modern Benchmarks for Large Language Models [5] How Can We Know When Language Models Know? 
On the Calibration of Language Models for Question Answering [6] How to Approach Ambiguous Queries in Conversational Search: A Survey of Techniques, Approaches, Tools, and Challenges [7] Self-Knowledge Guided Retrieval Augmentation for Large Language Models [8] When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories [9] REPOFORMER: Selective retrieval for repository-level code completion [10] Alignment for Honesty [11] Learn to Refuse: Making Large Language Models More Controllable and Reliable through Knowledge Scope Limitation and Refusal Mechanism [12] Rejection Improves Reliability: Training LLMs to Refuse Unknown Questions Using RL from Knowledge Feedback [13] Knowledge Overshadowing Causes Amalgamated Hallucination in Large Language Models --- Rebuttal 2: Title: Kindly Request for Read our Response and Re-consider your Assessment Comment: Dear Reviewer frNN, We wish to express our sincere gratitude for your invaluable feedback! **We kindly request that you review our responses to your observations and consider revising your assessments accordingly**. We believe that our explanations and additional experiments have thoroughly addressed your queries and concerns, Should you have any additional questions or require further clarification, please feel free to contact us. Your final evaluation and potential score update would be greatly appreciated and would significantly contribute to the refinement of our work. Thank you for your dedication and thoughtful reviews. We look forward to your final ratings. Best regards, Paper 10289 Authors --- Rebuttal Comment 2.1: Comment: The responses have addressed my concerns. I have raised my score to 6. --- Rebuttal 3: Title: Thanks for reading our rebuttal and raising the rating from 3 to 6. Comment: Thank you for increasing your rating from 3 to 6! Thank you again for your valuable time, insightful suggestions, and encouragement! 
We appreciate your recognition that our method is "**innovative and well-motivated**", "**experiments show significant improvements**", and our paper is "**well-written and easy to follow**". It is our great honor to receive your support during the discussion phase. Best regards, Submission 10589 Authors
Summary: This paper focuses on detecting the "knowledge boundary" of current large language models (LLMs), which would be helpful in handling the well-known hallucination problem in LLMs. The authors explore a new question answering setting (i.e., semi-open-ended questions). They employ an LLM-based approach to construct semi-open-ended questions and obtain answers from a target LLM, and they employ an open-source LLM, whose parameters and internal variables can be accessed and edited, to detect a black-box LLM's drawbacks. The proposed method of calculating the nearest semantic representation to select related answers seems interesting. Finally, the paper not only constructs a dataset to find the knowledge boundary of GPT-4, but also discovers that 82.90% of GPT-4's answers are unsatisfactory. It also finds that 40.15% of its hard (ambiguous) answers generated are unqualified. Strengths: 1. The proposed method sounds novel for the knowledge boundary discovery task. The method utilizes an open-source Large Language Model (LLM) to aid a black-box LLM, GPT-4, in identifying its knowledge limitations. It pinpoints words with low probabilities according to the probability distribution of the open-source LLM. Subtle modifications to the output-layer representations facilitate the extraction of the answers that are most analogous for LLMs. 2. The proposed semi-open-ended question task sounds important to explore. The task is more challenging than current QA tasks, since the number of candidate answers and the answer space (and possibly the correct answers) are not fixed and deterministic. 3. The paper writing is quite clear and the paper organization is easy to follow. 4. This paper also falls into a popular and important direction. It tries to address hallucination in LLMs in another way: rather than detecting hallucination case by case, this paper aims to discover the knowledge boundary of a given LLM (GPT-4). 5.
The experiments are solid enough. They include multiple base models (LLaMA at different sizes) and multiple evaluation metrics. The overall performance evaluation also includes case studies. 6. As GPT-4 is quite powerful, detecting the knowledge boundary of GPT-4 and finding its shortcomings is not an easy task. The outcome of this paper is quite attractive, since it discovers that 82.90% of GPT-4's answers are unsatisfactory using only a simple LLaMA-2 model. Weaknesses: 1. The proposed model works well on detecting the knowledge boundary in the setting of semi-open-ended questions. Even if that setting makes sense, (1) I am not sure how many questions in real applications belong to "semi-open-ended questions". (2) How effective is the proposed method on normal QA tasks (e.g. multiple-choice tests)? Could you please provide some insights on the proposed model's potential strengths on normal QA tasks? 2. The model name used in the paper should be consistent. For example, the authors use both "LLaMA-2-13b" (in line 252) and "LLaMA-2-13B" (Table 1) in the paper. 3. Typos: In line 252, it should be "we use two LLaMA-2-13b models". The authors omitted "models". In Table 1, the title of the second column should be "Auxiliary Model" instead of "Auxiliary Model Size", since LLaMA-2-13B is a name rather than a size. 4. What is the meaning of the underlined results in Table 3? The authors should make sure that each mark or notation in the paper is well described. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Though the proposed model shows its effectiveness in the semi-open-ended question setting, I wonder about potential future work on normal QA tasks (e.g. multiple-choice tests). Could you please provide some insights on the proposed model's potential strengths on normal QA tasks?
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors' limitation section indeed addressed some concerns about this paper, which is fine with me. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your thoughtful and encouraging feedback! We have carefully incorporated your suggestions and provided explanations as follows, aiming to enhance the quality and robustness of our research. ## Q1: How many questions in real applications belong to "semi-open-ended questions"? Following your suggestion, we find that in real life, semi-open-ended questions are quite common. We extract all question statements from an open-source general-domain corpus OpenWebText [1] and randomly sample 1k questions. We conduct statistics to find that approximately 33.6% of the questions are semi-open-ended, indicating that the research problem we are addressing has strong practical significance. We distinguish semi-open-ended questions by querying ChatGPT. ## Q2: Insights on the proposed model's potential strengths on normal QA tasks. On normal QA tasks, our method can help discover highly delusive wrong answers by reducing the probability of the ground truth answer. By comparing them with the ground truth, we may explain the factual hallucination problems at a more granular level. Besides, they can be used to construct a more challenging benchmark for normal QA tasks by including more delusive wrong answers. Our approach of modifying the LLM representations to guide answer generation may provide insight for different kinds of normal QA tasks: 1. It may help alleviate the hallucinations in knowledge-intensive QA tasks via representation engineering. 2. Editing LLM representations considering existing answers can reduce the probability of semantically related words, helping to generate more diverse answers for open-ended QA tasks. Besides, ambiguous answers found by our approach benefit QA systems in many ways, including: 1. Flagging ambiguous answers with higher uncertainty enhances LLM-based QA systems [2, 3]; 2.
Identifying ambiguous answers helps achieve selective retrieval that augments LLM-based QA systems with indispensable external information while reducing the distraction of irrelevant data [4, 5, 6]. We will discuss these potential research directions in our revised paper and explore them in our future works. ## Q3: Inconsistent model name & grammatical errors. Thank you for your careful reading! We will unify the naming of model names and resolve all grammatical errors in the revised version. ## Q4: Meaning of the underlined results in Table 3. The underlined results are either incorrect or unverifiable according to the ground truth, belonging to "Unqualified answers" in our categorization. We will explain the underlined results in more detail in both the caption of Table 3 and the main text to make it easier to understand. ## Reference [1] Aaron Gokaslan and Vanya Cohen. 2019. OpenWebText Corpus. [2] How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering [3] How to Approach Ambiguous Queries in Conversational Search: A Survey of Techniques, Approaches, Tools, and Challenges [4] Self-Knowledge Guided Retrieval Augmentation for Large Language Models [5] When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories [6] REPOFORMER: Selective retrieval for repository-level code completion --- Rebuttal Comment 1.1: Comment: The authors have provided some detailed responses to the reviews and questions, which makes sense with me. I will keep my positive score.
Summary: In this submission, the authors aim to explore the detection of large language models' (LLMs) knowledge boundary, which is a borderline telling us what LLMs really know. The detection of the knowledge boundary would play a crucial role in helping researchers deal with hallucination. Different from the widely used QA setting, the paper focuses on another QA setting (semi-open-ended questions), which may be ignored by other research but seems common in real life. To detect the knowledge boundary, the authors employ a small-scale open-source model (LLaMA) to help detect GPT-4's boundary. This paper has some interesting findings. For example, GPT-4 underperforms the auxiliary model, LLaMA-2-13B, in 50% of cases. Strengths: 1. The authors explore a new question answering setting (i.e. semi-open-ended questions). This setting is unexplored in QA tasks around LLMs, and it seems quite interesting and useful for LLMs. 2. Detecting the knowledge boundary is a promising and crucial direction toward better usage of LLMs and dealing with the notorious hallucination problem. 3. The proposed idea makes sense and is novel enough: it uses an open-source LLM to assist the black-box LLM (GPT-4) in detecting GPT-4's knowledge boundary by accessing the probability distribution and finding low-probability words. Minor changes to the output hidden representations help obtain the most similar answers for LLMs. 4. The paper writing is quite clear and easy to follow, with detailed graphical descriptions (the overview figure). 5. The proposed method seems effective in detecting GPT-4's knowledge boundary. It found that in some (50% of) cases, GPT-4 underperforms the auxiliary model, LLaMA-2-13B. Weaknesses: 1. The setting of "semi-open-ended questions" is quite interesting and seems useful and common in real life. However, is the term "semi-open-ended questions" widely used in the QA research area?
Although I am not so familiar with QA and its terminology, I suggest the authors conduct a full survey of the related terms and explain more about the meaning of “semi-open-ended questions” in the next version. 2. The proposed method works well on “semi-open-ended questions”. It would be better to provide detailed statistics (e.g., a questionnaire), an empirical study, or related analyses of “the proportion of semi-open-ended questions out of the whole QA scenario”. This would show the practical use of the proposed method in real life. 3. Could you please illustrate the full version of the cases in Table 6 and Table 7 (some cases are omitted due to the space limit)? Technical Quality: 4 Clarity: 3 Questions for Authors: 1. As mentioned in the Weaknesses, could you please provide the full version of the cases in Tables 6 and 7? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: Yes, the limitation section addressed by the authors seems fine with me. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are immensely grateful for your insightful and positive comments. We have addressed each point with careful consideration to ensure that our findings are presented with greater precision and rigor. ## Q1: Explanation regarding the meaning of "semi-open-ended questions" Thank you for your insightful comment and suggestion. Based on our research, we have not found evidence of the term "semi-open-ended questions" being widely used in the QA research area. These questions have multiple potential correct answers, which are challenging to fully enumerate. For example, when asked to "Tell me about some exercise habits that are easy to overlook but are good for your health," there are many correct answers, yet the question remains constrained by the context of 'exercise habits', 'easy to overlook' and 'good for your health'. In the next version, we will provide a more detailed explanation of the term "semi-open-ended questions", including diverse examples to facilitate a comprehensive understanding of our study. ## Q2: The proportion of semi-open-ended questions out of the whole QA scenario Following your suggestion, we find that in real life, semi-open-ended questions are quite common. We extract all question statements from an open-source general-domain corpus, OpenWebText [1], and randomly sample 1k questions. Our statistics show that approximately 33.6% of the questions are semi-open-ended, indicating that the research problem we are addressing has strong practical significance. We distinguish semi-open-ended questions by querying ChatGPT. ## Q3: The full version of the cases in Table 6 and Table 7 Following your suggestion, we demonstrate the full cases of Table 6 and Table 7 of our paper as follows. We will provide the full version in our next version. | Semi-open-ended Question | Tell me a list of foods that are rich in Vitamin A but low in fat. 
| | ------------------------------------------- | ------------------------------------------------------------ | | GPT-4 Response for Semi-open-ended Question | 1.Carrots 2. Spinach 3. Sweet Potato 4. Kale 5. Butternut Squash 6. Red Bell Peppers 7. Apricots 8. Broccoli 9. Romaine Lettuce 10. Cantaloupe 11. Yellow Corn 12. Tangerines 13. Tomatoes 14. Papaya 15. Peaches 16. Mangoes 17. Watermelon 18. Green Peas 19. Kiwi 20. Honeydew Melon 21. Asparagus 22. Black-Eyed Peas 23. Plums 24. Pineapple 25. Blueberries 26. Grapefruit 27. Oranges 28. Green beans 29. Brussel sprouts. 30. Guava 31. Dandelion Greens 32. Swiss Chard 33. Mustard Greens 34. Arctic Char 35. Prunes 36. Nectarines 37. Rose Hips 38. Parsley 39. Watercress 40. Radish 41. Red Cabbage 42. Beet Greens 43. Endives 44. Arugula 45. Pomegranates 46. Raspberries 47. Red Leaf Lettuce 48. Pumpkin 49. Bok Choy 50. Turnip Greens 51. Red Grapes 52. Passionfruit 53. Collard Greens 54. Zucchini 55. Strawberries 56. Clementines 57. Lima Beans 58. Cucumbers 59. Yellow Bell Peppers 60. Pawpaw. 61. Winter squash 62. Cantaloupes 63. Tuna 64. Persimmon 65. Pink Grapefruit 66. Honeydew melon 67. Guavas 68. Papayas 69. Passion fruit 70. Kiwi fruit 71. Blackberries 72. Cherries 73. Goji berries 74. Raspberry 75. Fenugreek leaves 76. Okra 77. Chayote 78. Eggplant. | | Auxiliary Model Response | 1.Bell peppers 2. Liver 3. Collards 4. Carrots 5. Spinach 6. Yogurt 7. Salmon 8. Whole grains 9. Fortified cereals 10. Clams 11. Milk 12. Shrimp 13. Whole eggs 14. Meat such as beef liver 15. Plantains 16. Fresh herbs like basil, parsley and dill 17. Cheese (low-fat varieties) 18. Vegetable oils such as soybean oil or corn oil | | Semi-open-ended Question | Tell me a list of world capitals where more than two languages are considered official. | | ------------------------------------------- | ------------------------------------------------------------ | | GPT-4 Response for Semi-open-ended Question | 1. London, United Kingdom 2. 
Montreal, Canada 3. Brussels, Belgium 4. Ottawa, Canada 5. Bern, Switzerland 6. Johannesburg, South Africa 7. Suva, Fiji 8. Biel/Bienne, Switzerland 9. Singapore, Singapore 10. Luxembourg City, Luxembourg 11. New Delhi, India 12. Addis Ababa, Ethiopia 13. Nairobi, Kenya 14. Helsinki, Finland 15. Vienna, Austria 16. Maputo, Mozambique 17. Oslo, Norway 18. Reykjavik, Iceland 19. Valletta, Malta 20. Nicosia, Cyprus 21. Dublin, Ireland 22. Vaduz, Liechtenstein 23. Tirana, Albania 24. Andorra la Vella, Andorra 25. Baku, Azerbaijan 26. Manama, Bahrain 27. Dhaka, Bangladesh 28. Saraevo, Bosnia and Herzegovina 29. Sofia, Bulgaria 30. Copenhagen, Denmark 31. Tallinn, Estonia 32. Tbilisi, Georgia 33. Baghdad, Iraq 34. Jerusalem, Israel 35. Astana, Kazakhstan 36. Beirut, Lebanon 37. Tripoli, Libya 38. Kuala Lumpur, Malaysia 39. Abuja, Nigeria 40. Islamabad, Pakistan 41. Doha, Qatar 42. Bucharest, Romania 43. Riyadh, Saudi Arabia 44. Madrid, Spain 45. Lome, Togo 46. Ankara, Turkey 47. Kiev, Ukraine 48. Dubai, United Arab Emirates 49. Canberra, Australia 50. Praia, Cape Verde 51. N'Djamena, Chad 52. Zagreb, Croatia 53. Prague, Czech Republic 54. Beijing, China 55. San Jose, Costa Rica 56. Santo Domingo, Dominican Republic 57. Quito, Ecuador 58. Cairo, Egypt 59. Port Vila, Vanuatu 60. Windhoek, Namibia 61. Kathmandu, Nepal 62. Wellington, New Zealand 63. Asuncion, Paraguay 64. Lisbon, Portugal 65. Moscow, Russia 66. Dakar, Senegal 67. Ljubljana, Slovenia | | Auxiliary Model Response | 1. Brussels, Belgium 2. Ottawa, Canada 3. Panama City, Panama 4. Lima, Peru 5. Toronto, Canada 6. Paris, France 7. Hong Kong, China 8. Moskva, Russia 9. Rome, Italy 10. Malabo, Equatorial Guinea 11. Stockholm, Sweden 12. Athens, Greece 13. Tokyo, Japan 14. Jakarta, Indonesia 15. Mexico City, Mexico 16. Bangkok, Thailand 17. São Paulo, Brazil 18. Buenos Aires, Argentina 19. Caracas, Venezuela 20. Bogota, Colombia | ## Reference [1] Aaron Gokaslan and Vanya Cohen. 2019. OpenWebText Corpus. 
--- Rebuttal Comment 1.1: Title: Thanks for authors' responses Comment: Thanks for your responses. I have read the authors' rebuttal, which has resolved my concerns. After carefully considering the others' review comments and the author's rebuttal, I will keep my score.
Summary: The paper presents a study of generating answers to questions from the tail of an LLM's distribution (GPT-4). They begin by constructing a dataset by generating questions with multiple answers from GPT-4. For each question, they continue decoding multiple answers and define the first 75% of generated answers as "common-sense answers" and the last 25% as "ambiguous answers". Their objective is then to develop methods for generating such "ambiguous answers" from GPT-4 more efficiently, without relying on prompting many times (as they did to construct their dataset) or on decoding parameters (which are not fully accessible for many API services). Their approach relies on using an auxiliary language model (LLaMA) to generate low-probability answers by reducing the probability of existing answers. They evaluate their method for generating low-probability answers using their constructed dataset, evaluating its ability to recover "ambiguous answers" as defined above. They conclude their work by performing analysis of these tail answers ("ambiguous" answers in their dataset and answers generated using their method), using retrieval+GPT-4 to verify each of these tail answers for correctness. They find that roughly half of these answers were correct according to their system. Strengths: This work develops a method for generating a large, diverse answer candidate set, which may be useful for a number of other tasks. The authors perform several ablation experiments, demonstrating the efficacy of different components of their method and sensitivity to hyperparameters. Weaknesses: Methods for generating a diverse set of answer candidates are evaluated against their ability to recover the set of tail-answers from GPT-4. Analysis, however, demonstrated that roughly half of these tail-answers are incorrect. Evaluations are therefore designed to generate answers that match GPT-4's tail distribution, including generating the same set of incorrect answers. 
The evaluation dataset is quite small and results only demonstrate minor improvement without significance testing. Many components of the work lack explanation. The authors reference "human annotation" for verifying answers; however, it is not clear how this is done, or what the exact instructions were. While the checklist notes that full annotation instructions and compensation details were provided in supplementary materials, I did not find these. The retrieval-based system used for validating answers also lacks description, only noting that they used Microsoft Copilot to perform retrieval. Technical Quality: 3 Clarity: 2 Questions for Authors: See the last point in weaknesses above. Also, it's somewhat unclear to me how the LLaMA model is used as an auxiliary model for GPT-4. Is it auxiliary because GPT-4 can be used to validate generated answers? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: See final note under weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable questions! We have incorporated the suggested experiment and provided important clarifications. We hope that we have addressed your concerns and resolved possible misunderstandings. Based on our clarifications, we would greatly appreciate it if you would reconsider the final evaluation of our work. ## Q1: Clarification on the possible misunderstanding of our task We kindly point out that the statement "Methods ... to recover the set of tail-answers from GPT-4 … roughly half of these tail-answers are incorrect." in your review is a misunderstanding. It should be clarified that "Methods ... are evaluated based on their ability to identify shortcomings (new ambiguous answers) in the target LLM." Our task is not to “recover the set of tail-answers from GPT-4”. Instead, **our motivation is to identify LLMs’ shortcomings by discovering more different ambiguous answers for LLMs where they tend to make mistakes** (beyond the knowledge boundary of GPT-4); please refer to Lines 11, 39 and 61 of our paper for details. In the next version, we will be clearer about our task to avoid such misunderstandings. ## Q2: Unclear about how the auxiliary model is used Intuitively, **the auxiliary model aims to generate hard (ambiguous) answers that reveal GPT-4's shortcomings**: answers on which GPT-4 easily makes mistakes (as discussed in Lines 71 and 181). That is, we employ the auxiliary model (LLaMA) to uncover many new ambiguous answers that GPT-4 struggles to produce. Our findings show that **50% of the new ambiguous answers found by the auxiliary model reflect the shortcomings of GPT-4** (see Sec 4.5). Specifically, we use the pseudo-inverse model embedding to estimate the nearest semantic representation of the existing answers (from GPT-4), then reduce their generation probability to produce new answers with lower probabilities (see Sec 3.3). 
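To make the probability-reduction step concrete, here is a minimal numerical sketch of the idea (the matrix shapes, the penalty weight `alpha`, and all variable names are illustrative assumptions, not our actual implementation): the pseudo-inverse of the output embedding matrix maps a distribution over an existing answer's tokens back to a hidden-state direction, which is then suppressed before projecting to logits, so the existing answer becomes less likely.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 100, 16
W = rng.normal(size=(vocab, dim))  # output embedding matrix (vocab x dim), toy values
hidden = rng.normal(size=dim)      # decoder hidden state at the current step

# Distribution placing mass on the tokens of an already-generated answer.
existing_tokens = [3, 7, 42]
target = np.zeros(vocab)
target[existing_tokens] = 1.0 / len(existing_tokens)

# The pseudo-inverse maps this distribution back to a hidden-state
# direction that points toward the existing answer.
answer_dir = np.linalg.pinv(W) @ target

# Suppress that direction before projecting back to logits, lowering
# the existing answer's tokens and freeing probability mass for new,
# lower-probability answers.
alpha = 5.0
baseline_logits = W @ hidden
new_logits = W @ (hidden - alpha * answer_dir)

# Total logit mass on the existing answer's tokens decreases.
assert new_logits[existing_tokens].sum() < baseline_logits[existing_tokens].sum()
```

In this toy setting, subtracting the projected direction provably reduces the summed logits of the penalized tokens; the choice of `alpha` trades off suppression strength against fluency.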
## Q3: Improvement is minor Following your suggestion, we conducted a bootstrap significance test to compare our method with the baselines. We found that the improvement in the Average Overlap Ratio (AOR) is **statistically significant under the t-test with p<0.05**. This indicates that our method effectively reduces answer repetition without compromising overall performance. The core empirical contribution of our work extends well beyond the results presented in Table 1 of our paper. **A more significant contribution is identifying GPT-4's shortcomings** (please see Line 85): Our findings reveal that GPT-4 yields unsatisfactory results in 82.9% of the questions. Furthermore, about 50% of the new ambiguous answers identified by our method fall outside the knowledge boundary of GPT-4 (see Sec 4.5). Our performance has been acknowledged by all the other reviewers. Reviewer frNN praises our work, stating, "Experiments show significant improvements in understanding LLM knowledge boundaries", Reviewer hUqe comments, "The experiments are solid enough", and Reviewer bcni observes, "The performance of the proposed methods seems effective in detecting GPT-4’s knowledge boundary." ## Q4: The dataset is quite small **Our dataset is comparable in size to many other hallucination evaluation datasets within the research community**, as shown in the following table. It covers multiple domains and is highly effective at identifying the knowledge boundary of GPT-4. It successfully identifies shortcomings in GPT-4's responses for 82.9% of the questions, using only vanilla prompts. 
| Dataset | TruthfulQA(Lin et al., 2022) | HaluQA(Cheng et al., 2023) | FreshQA(Vu et al., 2023) | FELM(Chen et al., 2023d) | ChineseFactEval(Wang et al., 2023a) | | ------- | ---------------------------- | -------------------------- | ------------------------ | ------------------------ | ----------------------------------- | | Size | 817 | 450 | 600 | 817 | 125 | **Constructing and expanding the dataset is expensive and time-consuming.** Specifically, it takes a human annotator about 2 minutes to assess and double-check the truthfulness of each answer. Our dataset contains approximately 1k questions, each with an average of 13 tail-answers to be verified. Considering the requirement for annotators to read and assess the credibility of the retrieved information (following our annotator principles in Appendix A), the total workload amounts to 435 working hours. As we paid 8 dollars per person per hour, it cost us 3483 dollars to construct the dataset. Even if we hired more than 5 annotators, it would still take more than 72 hours (**more than 7 days if they work 10 hours per day**), exceeding the time limit of the rebuttal period. ## Q5: Evaluation and human annotation guidance need a clearer explanation We kindly point out that we have introduced the human annotation procedure in Appendix A and provided full guidelines in the file named **"human guide.docx" in the supplementary material**, as well as a reminder in Appendix G. Specifically, in human evaluation, we ask the annotators to read the judgments generated by the retrieval-based evaluation system (Microsoft Copilot), assess the authority of the retrieved information, and evaluate the degree of certainty of the tone of the Copilot judgments. Then, annotators categorize answers into correct, incorrect, and unverifiable (see Line 563 of our paper and **the fourth note in "human guide.docx" in the supplementary material**). 
In retrieval-based evaluation, we instruct a RAG system with well-designed instructions (please see details in Sec 3.4) to verify the truthfulness of each tail answer. Specifically, we concatenate each candidate answer with the question and prompt Microsoft Copilot to search online for related information, generate a summary, and make judgments (see Appendix G for full instructions). In our revised paper, we will explain these components in the experimental setting to avoid such confusion. --- Rebuttal 2: Title: Kindly Request to Read our Response and Re-consider your Assessment Comment: Dear Reviewer e3tW, We wish to express our sincere gratitude for your invaluable feedback! **We kindly request that you review our responses to your observations and consider revising your assessments accordingly**. We believe that our explanations and additional experiments have thoroughly addressed your queries and concerns. Should you have any additional questions or require further clarification, please feel free to contact us. Your final evaluation and potential score update would be greatly appreciated and would significantly contribute to the refinement of our work. Thank you for your dedication and thoughtful reviews. We look forward to your final ratings. Best regards, Paper 10289 Authors --- Rebuttal Comment 2.1: Title: Any Remaining Concerns and Further Advice Comment: As the response phase draws to a close, we are happy to explain if you have any remaining concerns or further advice. Your expertise and constructive criticism are invaluable to us, and we are keen to utilize the remaining time effectively to address any of your remaining or new questions. **We have received the responses from Reviewers frNN and hUqe.** Reviewer frNN recognizes all the additional experiments and explanations and finds that they address the reviewer's concerns. **The reviewer raised the rating from 3 to 6. 
Reviewer hUqe is also satisfied with our rebuttal and keeps the positive rating (7).** Thank you once again for dedicating your valuable time to our paper! Best regards, Submission 10289 Authors --- Rebuttal 3: Title: Any Unresolved Concerns on Our Work? Comment: Dear Reviewer e3tW, We are sincerely grateful for your thoughtful feedback and suggestions! We have taken your observations to heart and addressed all the concerns you raised. **All other reviewers have responded positively to our rebuttals**. Specifically, we are thankful to **Reviewer frNN for reading our rebuttal and for increasing the rating from 3 to 6**, which indicates our new experiments and explanations are quite satisfactory. Reviewer hUqe has also expressed satisfaction with our rebuttal and has decided to maintain the positive rating of 7. Furthermore, **Reviewer bcni has noted: “After carefully considering the others' review comments and the author's rebuttal, I will keep my score” (8).** As the discussion period draws to a close, we sincerely invite you to review our responses and reconsider your assessments. Should you have any unresolved concerns, please do not hesitate to contact us. Thank you once again for your dedication! We look forward to your final ratings! Best regards, Paper 10289 Authors
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Rule Extrapolation in Language Modeling: A Study of Compositional Generalization on OOD Prompts
Accept (spotlight)
Summary: This paper studies rule extrapolation, an OOD behavior of autoregressive LLMs, across different models to understand the effect of the model's architecture on this specific ability. The paper also introduces a normative theory for OOD prompt completion, which explains well the empirical observations about the training dynamics enabling rule extrapolation. Strengths: The paper studies an interesting and important question for language models: how different models extrapolate and how well they do so. It is well-structured and clearly presented. The authors conduct extensive and well-designed simulations, yielding valuable insights into the rule extrapolation capabilities of various models. Additionally, the paper introduces a normative theory to elucidate the training dynamics associated with rule extrapolation observed in practice. This is a very good starting point for the community to investigate the OOD problem for language models. Weaknesses: The paper only investigates four models with fixed sizes. It is commonly acknowledged that model capacity relies on model size, which is not adequately addressed in the paper. While the study provides a general impression of the models' capabilities on different tasks, it remains unclear whether the observed differences are due to variations in model size, specific structural characteristics of each model, or a combination of both. Technical Quality: 4 Clarity: 4 Questions for Authors: I have several concerns: 1. Does the model size significantly impact performance? For instance, if the size of the Transformer model is increased, is there a potential for a substantial improvement in its performance on regular grammars? 2. Contemporary language models typically use sampling rather than greedy decoding to generate sequences. If the sampling technique is altered, would the results remain consistent? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: See weakness and questions. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their positive evaluation of our work, in particular highlighting its relevance, good structure, clarity and deeming our experiments extensive and well-designed. We agree with the reviewer that beyond the present experiments, there are various other options and ablations to evaluate. We have carried out additional experiments on: - Transformer model size - Sampling instead of greedy decoding - Hyperparameter ablation - A new context-sensitive language - The xLSTM architecture Please find the results of your suggested experiments under ‘Transformer size ablation’ and ‘Sampling’ below. The results of the rest of the experiments are in [our general response](https://openreview.net/forum?id=Li2rpRZWjy&noteId=1xbJMvUYnC). ## **Transformer size ablation**: As suggested, we tested varying size settings (different numbers of layers and heads) for the Transformer architecture to determine whether increasing size can improve performance on regular languages. As shown in Figure 7(b) (cf. the uploaded pdf), increasing the model size does not meaningfully improve performance on regular languages; the best values remain those originally used (`num_decoder_layers = 7, num_heads = 5`). For non-regular languages, the Transformer already outperformed the other architectures. ## **Sampling**: Our initial results use greedy decoding, but we conducted experiments to evaluate the sampling method for next token prediction. As shown in Figure 6(a), we conclude that while the Transformer is the best choice with greedy decoding (except for regular languages where LSTM performs better), LSTM appears to excel when using sampling. These results also open up new interesting future directions, e.g., investigating the influence of different temperature values in the softmax. We hope that our additional experiments have addressed the reviewer’s concerns. --- Rebuttal Comment 1.1: Comment: Thanks for your reply! 
I am satisfied with the answers and I don't have any additional concerns or questions. --- Reply to Comment 1.1.1: Comment: We are delighted that our answers addressed your concerns. We also appreciate your positive ratings of "excellent" for soundness, presentation and contribution. If possible, we would be grateful if you could consider raising the overall rating. If there are any further improvements we could make to achieve a higher score, we would be more than happy to address them.
Summary: # Problem: We lack systematic understanding of the Out-Of-Distribution (OOD) behaviours of autoregressive language models (LMs), such as in-context learning with natural-language prompts, despite successful deployment of LMs in such OOD situations. Natural language (NL) prompts, i.e. real-world data, are acknowledged as too complex to systematically study OOD behaviours. # Contributions: Formal languages have obvious practical relevance for programming languages and formal mathematics, despite their dissimilarities with NLs. In contrast to NLs, formal languages enable studying in a systematic fashion rule extrapolation (as a special case of compositional generalization (systematicity)/OOD behaviours) and therefore provide a systematic framework to analyze and better understand LMs. Thus, the paper defines and empirically evaluates rule extrapolation for simple formal languages, with linear, recurrent (LSTM), Transformer, and state space models. The paper also investigates whether the emergent capabilities found in Transformer-based LMs are also present in simpler models. Transformer models are found to surprisingly struggle with regular languages, while they outperform other models on the other categories, i.e. context-free and context-sensitive languages. LSTM and SSM models are found to indeed have some emergent capabilities for OOD behaviours, albeit to a lesser extent than Transformers, and the LSTM struggles less with regular languages. Finally, the paper proposes a non-parametric prior and prediction scheme for OOD prompt completion using Solomonoff induction, and discusses it in comparison to recent Algorithmic Information Theory frameworks, towards ‘building and assessing future practical models’. Their proposed prior is found to match the training dynamics of the Transformers on rule extrapolation for a context-free language ($L_3=\{a^n b^n\}$). 
Strengths: # Strengths: ## Originality: SO1: The paper is as original as it gets, as far as I am concerned, and I appreciate how it tries to still build bridges with previous work, mainly as it discusses how their normative approach relates to previous Algorithmic Information Theory methods. ## Quality: SQ1: I appreciate section 2.2.’s discussion between Reizinger et al. [2024] and Zhou et al. [2023], and would encourage the authors to expand further on the impact on this work, possibly in the Discussion section. ## Clarity: SC1: I appreciate the clarity externalisation approach used in Section 5.1. ## Significance: SS1: Section 4 - Regular grammars, the paper proposes insights about the Transformer’s surprisingly low performance by relating it to previous work on parity (but please see WS1 below). Weaknesses: # Weaknesses: ## Originality: Nothing to report, I find this paper as original as it gets. ## Quality: WQ1: section 2.1 does not describe recursively enumerable languages nor explain why they are not considered in this paper. Adding those considerations could be one way to paint a more complete bigger picture about the current work and therefore enhance its quality. ## Clarity: WC1: section 2.2. starts directly with ‘statistical generalization’ and ‘identifiability’ without having defined them, which might possibly hinder the readability of the paper for readers that are unfamiliar with this (relatively) recent literature. WC2: missing ‘that’ or ‘which’ on ln94, possibly --> ‘formal languages _that_ are’ … WC3: typos on ln208: ‘largest extent(64%)’ --> ‘..(66%)’ ; on ln209 : ‘model again due’ --> ‘model _is_ again due’ WC4: Figure 2 left: it is unclear what is represented given the range of the legend (from 0 to -15), please clarify? Similarly, in Figure 2 right, I do not understand why it is a _sum_ of probabilities and not just the probability or likelihood, please clarify? 
## Significance: WS1: Following SS1 above, I think the paper could have a greater impact by possibly reproducing the parity experiment with the current architectures and proposing correlation measures between the scores on OOD (R1/R2) and the parity accuracy, for instance. WS2: Providing results for the recently proposed xLSTM would increase the impact of the paper. Thus I would like to encourage the authors to consider adding it. Technical Quality: 3 Clarity: 3 Questions for Authors: # Questions: Please see above, but mainly: - WC4 # General advice: I would like to propose that the authors present section 4.1 after section 5 (theory before experimental validation) for increased impact. Indeed, when reading the paper the first time, I was surprised to find section 4.1 and could not understand what it brought to the discussion. After having read section 5, it became clear that section 4.1 is an experimental validation of the theoretical concerns of section 5, thus my recommendation to change the ordering. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: ## Limitations: The paper acknowledges limitations in terms of external validity with respect to different architectures and different attention or encoding mechanisms. # POST REBUTTAL UPDATE: Most of my concerns have been addressed through the rebuttal in a satisfiable way, thus I am increasing my overall rating to 8, as well as contribution and presentation ratings to 'good'. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We warmly thank the reviewer for their positive evaluation of our work, in particular highlighting its originality, clarity and interesting nature. Please find our replies to your comments and suggestions below. ## WQ1 (section 2.1 does not describe recursively enumerable languages) We added a discussion of recursively enumerable languages. We omitted these, similar to Deletang et al. (2022), as they require an infinite tape to simulate, which is impossible in practice. ## SQ1 (Expand discussion about further impact) Thank you for your feedback; we have expanded on this in our discussion, which we summarise here. While other OOD generalisation types were examined in the literature (Ruoss et al., 2022; Deletang et al., 2022; Ahuja and Mansouri, 2024), this is the first work studying/evaluating rule extrapolation. This novel concept has the potential to impact LLM research both on conceptual and practical levels: - General compositional generalization notions examine whether, from learning multiple concepts/rules separately, the model can understand the composition of the concepts/intersection of the rules. However, in rule extrapolation we measure the reverse direction: from the composition/intersection, can the model identify the concepts/rules separately. Importantly, this notion of compositionality is less straightforward than the generally considered direction. - Natural languages are compositional; thus, we expect this property in LLMs if they ought to model these languages well. Therefore, studying rule extrapolation can help us better understand LLMs’ inner workings. - Rule extrapolation allows for studying compositional generalisation ability easily on a variety of datasets, such as formal or programming languages. Therefore rule extrapolation has the potential to become an established benchmark task for evaluating current and future LM architectures. 
## WC1 (New paragraph in the background) We have included a new paragraph in the background section about statistical generalisation and identifiability to improve the readability of the paper. ## WC2-3 (typos) Thank you for pointing out the typos, we have corrected them. ## WC4 (Figure 2) On the left, we plotted the log probability, resulting in the range from 0 to -15. We updated the plot with a label and the caption to clarify this issue. On the right, we intended to show that R2 is learnt first and the language is identified as its subset, which is why we *summed* the probabilities of all sequences obeying only R1, only R2, both R1 and R2, and neither R1 nor R2. ## WS2 (xLSTM) We have carried out the suggested experiments with the xLSTM architecture. Due to the limitations of our computational resources, not all languages could be tested with a significant number of seeds, and there was no time for extensive hyperparameter ablations. We aim to update our results in the discussion period. Our results are in Figure 6 (b) in the uploaded document. We see that xLSTM indeed outperforms LSTM, but cannot reach the Transformer on the non-regular languages. ### Hyperparameters for xLSTM: We used the following hyperparameters for xLSTM. Most of these were suggested by the xLSTM GitHub repository, except for num_blocks and xlstm_embedding_dim, which were lowered to match the size of the other models we trained. This model has 185K parameters.

```
model: xlstm
mlstm_block:
  mlstm:
    conv1d_kernel_size: 4
    qkv_proj_blocksize: 4
    num_heads: 4
slstm_block:
  slstm:
    backend: cuda
    num_heads: 4
    conv1d_kernel_size: 4
    bias_init: powerlaw_blockdependent
  feedforward:
    proj_factor: 1.3
    act_fn: gelu
num_blocks: 5
xlstm_embedding_dim: 64
slstm_at: [ 1 ]
```

## WS1 (Parity Experiment) Thank you for this suggestion. We thought about how to best incorporate parity into our framework, and we carried out the following experiment. 
**Parity extrapolation.** In the anbn language, Rule 1 (#a=#b) is a subset of the rule "even number of tokens in the sequence". Thus, in order to extrapolate R1 on OOD prompts where R2 is broken, the model must also understand and extrapolate parity (since #a=#b means the sequence has even length). To understand the relationship between parity and R1 extrapolation, we tested whether failure on R1 extrapolation also implies failure on parity extrapolation. Our results **(see the table below)** show that models learnt to extrapolate parity, despite their imperfect R1 extrapolation accuracies. This supports our intuition that parity is an easier task, and that R1 extrapolation requires other concepts as well (such as equality).

| Model | Test loss | ID R1 | ID parity | OOD R1 | OOD parity |
|---|---|---|---|---|---|
| Linear | $2.796\scriptscriptstyle\pm 0.171$ | $0.200\scriptscriptstyle\pm 0.000$ | $1.000\scriptscriptstyle\pm 0.000$ | $0.275\scriptscriptstyle\pm 0.000$ | $1.000\scriptscriptstyle\pm 0.000$ |
| LSTM | $0.019\scriptscriptstyle\pm 0.000$ | $1.000\scriptscriptstyle\pm 0.000$ | $1.000\scriptscriptstyle\pm 0.000$ | $0.351\scriptscriptstyle\pm 0.056$ | $1.000\scriptscriptstyle\pm 0.000$ |
| Transformer | $0.022\scriptscriptstyle\pm 0.002$ | $1.000\scriptscriptstyle\pm 0.000$ | $1.000\scriptscriptstyle\pm 0.000$ | $0.628\scriptscriptstyle\pm 0.103$ | $1.000\scriptscriptstyle\pm 0.000$ |

## General advice

Thank you for pointing out this unclarity in our narrative. We have changed the structure as suggested: Section 5 now precedes Section 4.1. We would like to thank the reviewer again for their constructive questions and suggestions, which we hope we have successfully addressed.
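The three rule checks used in the parity experiment above (R1: #a=#b, R2: all a-s precede all b-s, parity: even sequence length) are simple enough to sketch directly; the function names below are ours, not taken from the paper's evaluation code.

```python
def obeys_r1(seq: str) -> bool:
    """R1 of the a^n b^n language: equal numbers of a-s and b-s."""
    return seq.count("a") == seq.count("b")


def obeys_r2(seq: str) -> bool:
    """R2: all a-s precede all b-s, i.e. the sequence matches a*b*."""
    return "ba" not in seq


def even_parity(seq: str) -> bool:
    """Parity: the sequence has an even number of tokens (implied by R1)."""
    return len(seq) % 2 == 0


# An OOD prompt starting with "b" breaks R2; a model extrapolates R1 if
# its completion still balances the a-s and b-s.
print(obeys_r1("aabb"), obeys_r2("aabb"))  # → True True
print(obeys_r1("baab"), obeys_r2("baab"))  # → True False
```

Since R1 implies even length, any completion that satisfies R1 automatically passes the parity check, which is why parity is the easier of the two tasks.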
--- Rebuttal Comment 1.1: Title: Reply Comment: I thank the authors for their thorough rebuttal and their careful address of my concerns and recommendations. I am very satisfied with it, and thus increase my overall rating to 8, and contribution and presentation ratings to 'good'.
Summary: This paper studies the compositional generalization of auto-regressive large language models with respect to rule extrapolation in formal languages. Both linear and recurrent architectures, including transformers, are compared on the task of inferring the rules that define regular grammars, context-free grammars, and context-sensitive grammars. The paper also presents the theoretical contribution of a normative theory of rule extrapolation, grounded on the idea of the Solomonoff prior, taken from algorithmic information theory. Strengths: + Understanding the compositional generalization capability of large language models is a challenging important task + Interesting theoretical framework inspired by the Solomonoff prior and information theory + Experimental evaluation with different baselines and languages/grammars Weaknesses: - Few theoretical insights to justify the experimental results (see comments below) - One single architecture tested for each model, and one single language for each category Technical Quality: 3 Clarity: 2 Questions for Authors: * The paper makes the conjecture that LSTMs perform better than LLMs on regular languages, because such grammars require computing parity, which is a notoriously difficult task for LLMs. Did the experimental evaluation consider also different architectures and/or hyper-parameters for the transformer, to exclude that such worse performance depends on the choice of the model? * Similarly, the experimental evaluation is conducted on a single language for each category, and for a single architecture for each model: I wonder whether different architectural choices were tried for different models. * At the end of page 4, in the definition of "Context-sensitive grammar" there is R2 twice, while the first one should be R1. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: See questions above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive evaluation of our chosen topic, proposed theory and experiments. We address your questions below: >One single architecture tested for each model, and one single language for each category Our architectures have been selected through initial manual hyperparameter optimisation. However, we have added further new experiments: size, hyperparameter, sampling technique ablations, a new architecture (xLSTM) and a new context-sensitive grammar (non-nested Dyck). Originally, we had 2 languages in each category except in the context-sensitive, but now there are 2 languages in that category, too. Please find the details in the joint response to reviewers, and on the uploaded document. Altogether (including all seeds), we have evaluated 1170 models. >Did the experimental evaluation consider also different architectures and/or hyper-parameters for the transformer, to exclude that such worse performance depends on the choice of the model? - From the additional experiments, it can be seen that increasing the **model size** does not meaningfully improve performance on regular languages; the best values remain those originally used (num_layers = 7, num_heads = 5) (Figure 7 (b)). - Regarding **hyperparameters** such as the optimisation algorithm and the learning rate, when considering the best settings for each architecture, LSTM consistently performs better than the Transformer on regular languages (Figure 7 (a)). - When using **sampling** instead of greedy decoding for next token prediction, LSTM is consistently better than the Transformer in OOD R1 extrapolation (Figure 6 (a)). >Similarly, the experimental evaluation is conducted on a single language for each category, and for a single architecture for each model: I wonder whether different architectural choices were tried for different models. 
Besides the size, optimisation algorithm, learning rate, and sampling technique ablations, we added a **new context-sensitive language** to ensure there is more than one grammar in each category. The new language is the non-nested Dyck language, where brackets and parentheses do not need to be nested; for example, the sequence ([)] is grammatical here. For the results, please see the joint response. We conclude that this language fits our narrative, specifically that the Transformer architecture is usually the best choice for non-regular languages. Figure X (cf. the uploaded pdf) shows the results: the Transformer and LSTM perform similarly on both OOD R1 and R2 completion. We note that although the Linear model appears to be the best, it is not representative since it predicts only the EOS token, resulting in an empty sequence that obeys both rules.

>At the end of page 4, in the definition of "Context-sensitive grammar" there is R2 twice, while the first one should be R1.

Thank you for pointing out the typo; we have corrected it. We would like to thank the reviewer for their valuable feedback; we hope that the extra experiments and reframed insights address the reviewer's concerns.

--- Rebuttal Comment 1.1: Title: Rebuttal Comment: I thank the authors for the effort put in the rebuttal and above all for the additional experiments, which strengthen the message of the paper. I have increased my score.
Summary: The article examines systematic generalization in neural networks using artificial grammar learning tasks. Distinctive to this work, the authors operationalize systematic generalization through studying how models extrapolate learned rules to ungrammatical (and thus OOD) sequences. Their examination considers a range of models as well as grammars of varying complexity. The article ends with a discussion of algorithmic information theory and how it relates to OOD generalization. Strengths: The article has a number of strengths: - Distinctive operationalization of compositional generalization - Solid technical methodology - Evaluated a number of different model architectures and grammars - The writing and presentation are clear Weaknesses: The article also has weaknesses: - In my view the section on "Normative theory of OOD prompt completion" didn't contribute much to the article. It's right that this can serve as a normative account of how models should generalize OOD. However the relationship to the current article seemed thin. Instead, the authors could have established normative baselines for their tasks using probabilistic grammar induction and actually run a model to demonstrate a type of normative generalization. EDIT : The author's response helps to address this question. - This is a fine, technically sound article that doesn't strike me as particularly high impact. Technical Quality: 4 Clarity: 3 Questions for Authors: - This statement is just asserted but wasn't obvious to me and could use justification: "In the a^n b^n language, R2 (a-s before b-s), is, on average, simpler to generate than R1 (#a=#b) and R1 ∩ R2. EDIT : The author's response helps to address this question, and should be included in an updated paper. typos - "an Bob's enemy" pg. 7 Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: This section was fine Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive evaluation of our experiments and presentation. Please find below our replies to your concerns.

## RE normative theory

We thank the reviewer for their feedback on our normative theory in Section 5. We agree that our theory relates loosely to the rule extrapolation phenomenon discussed primarily in this article. However, we think that even general (normative) theories of OOD prompt completion are limited in the literature, let alone specialised versions such as the reviewer suggests. As a first step, we wanted to offer a general perspective on how an idealised model should complete OOD prompts, as we felt that general intuition was missing. Therefore, rather than fully explaining the special case of rule extrapolation, our work offers deeper intuition about the nature of OOD prompt completion as choosing the simplest hypotheses consistent with the training data. Our theory builds on Solomonoff induction, the foundation of Bayesian inference, and offers a novel way of relating OOD extrapolation to this general framework. Apart from providing foundational understanding, we also relate our normative theory to the dynamics of rule learning in our experiments. We argue that the order in which the rules are learned is governed by the simplicity of the rules. This is in line with existing observations on the simplicity bias of language models (towards low Kolmogorov complexity; Goldblum et al., 2023), and further motivates our approach of building on the general theory of Solomonoff induction. In order to strengthen this connection in our narrative, we swapped Section 4.1 (training dynamics experiment) and Section 5 (normative theory), which makes it explicit that the training dynamics experiment is a verification of the high-level ideas in our normative theory section.
## RE simplicity of rules

This high-level statement refers to the Kolmogorov complexities of generating R1 (#a=#b) and R2 (a-s before b-s). The shortest program generating instances following R2 is likely shorter than the shortest program generating instances from R1, since the R2 program can default to outputting only b-s once it generates a b, while the R1 program needs to keep track of the number of a-s and b-s it generates. Furthermore, R1 in itself defines a context-free language, which is accepted by a pushdown automaton, while R2 defines a regular language, which is recognisable by a simpler automaton, a finite-state machine.

## RE impact of paper

Thank you for your feedback. We realise that we have not fully clarified the significance of our work in our original manuscript. We have expanded on this in our discussion, which we summarise here. While other OOD generalisation types were examined in the literature (Ruoss et al., 2022; Deletang et al., 2022; Ahuja and Mansouri, 2024), this is the first work studying and evaluating rule extrapolation. This novel concept has the potential to impact LLM research on both conceptual and practical levels:

- General compositional generalization notions examine whether, from learning multiple concepts/rules separately, the model can understand the composition of the concepts/intersection of the rules. In rule extrapolation, however, we measure the reverse direction: from the composition/intersection, can the model identify the concepts/rules separately? Importantly, this notion of compositionality is less straightforward than the generally considered direction.
- Natural languages are compositional; thus, we expect this property in LLMs if they are to model these languages well. Therefore, studying rule extrapolation can help us better understand LLMs' inner workings.
- Rule extrapolation allows for studying compositional generalisation ability easily on a variety of datasets, such as formal or programming languages.
Therefore rule extrapolation has the potential to become an established benchmark task for evaluating current and future LM architectures. In order to further strengthen the significance of our contribution, we added multiple new experiments (xLSTM, size, hyperparameter, sampling technique ablations, and a new context-sensitive language). Please find the details in [our joint response](https://openreview.net/forum?id=Li2rpRZWjy&noteId=1xbJMvUYnC), and in the uploaded document. ## References M. Goldblum, M. Finzi, K. Rowan, and A. G. Wilson. (2023). The no free lunch theorem, Kolmogorov complexity, and the role of inductive biases in machine learning, URL https://arxiv.org/pdf/2304.05366 K. Ahuja and A. Mansouri. (Feb. 2024) On Provable Length and Compositional Generalization, URL http://arxiv.org/abs/2402.04875 Deletang, G., Ruoss, A., Grau-Moya, J., Genewein, T., Wenliang, L. K., Catt, E., Cundy, C., Hutter, M., Legg, S., Veness, J., & Ortega, P. A. (2022, September 29). Neural Networks and the Chomsky Hierarchy. The Eleventh International Conference on Learning Representations. https://openreview.net/forum?id=WbxHAzkeQcn Ruoss, A., Delétang, G., Genewein, T., Grau-Moya, J., Csordás, R., Bennani, M., Legg, S., & Veness, J. (2023). Randomized Positional Encodings Boost Length Generalization of Transformers (arXiv:2305.16843). arXiv. https://doi.org/10.48550/arXiv.2305.16843 --- Rebuttal Comment 1.1: Comment: Thank you for the detailed reply and additional experiments, which I found helpful. I raised my score.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their constructive feedback and suggestions. We were happy to see that **the reviewers found our topic important, our methodology solid, and our presentation clear**. We supply separate replies to all reviewers, but summarise the common points in our joint response. The main suggestion of the reviewers was to add more experiments, which we summarise here in the joint response. We implemented all the suggested experiments, including a new language, a new architecture, and an extensive hyperparameter search covering model size, optimization algorithm, learning rate, a parity check, and the next-token prediction method. Overall, these experiments confirm our results and narrative. However, due to time and computational constraints, the results are based on a limited number of seeds, and some settings are missing. We will continue these experiments and update the results during the discussion period. If our paper is accepted, we will include all the necessary experiments in the camera-ready version. Currently, altogether (including all seeds), we have evaluated 1170 models. Please find our current conclusions below.

### New context-sensitive language:

We added a new context-sensitive language to ensure there is more than one grammar in each category. The new language is the non-nested Dyck language, where brackets and parentheses do not need to be nested; for example, the sequence ([)] is grammatical here. We conclude that this language fits our narrative, specifically that the Transformer architecture is usually the best choice for non-regular languages. **The table below** shows the results: the Transformer and LSTM perform similarly on both OOD R1 and R2 completion. We note that although the Linear model appears to be the best, it is not representative since it predicts only the EOS token, resulting in an empty sequence that obeys both rules.
| Model | Test loss | ID R1 | ID R2 | OOD R1 | OOD R2 completion |
|---|---|---|---|---|---|
| Linear | $4.013\pm 0.254$ | $0.000\pm 0.000$ | $0.000\pm 0.000$ | $0.000\pm 0.000$ | $1.000\pm 0.000$ |
| LSTM | $0.645\pm 0.019$ | $0.981\pm 0.042$ | $0.956\pm 0.061$ | $1.000\pm 0.000$ | $0.894\pm 0.165$ |
| Mamba | $0.675\pm 0.018$ | $0.745\pm 0.070$ | $0.807\pm 0.185$ | $0.684\pm 0.159$ | $0.810\pm 0.212$ |
| Transformer | $0.640\pm 0.016$ | $1.000\pm 0.000$ | $1.000\pm 0.000$ | $0.980\pm 0.045$ | $0.973\pm 0.044$ |

### xLSTM:

As suggested by one of the reviewers, we implemented the latest extension of the LSTM architecture, the xLSTM model. Due to time constraints, we did not conduct a hyperparameter search; however, a model with approximately the same number of parameters as our other architectures performs in alignment with our narrative. Furthermore, we see that xLSTM indeed outperforms LSTM, but cannot reach the Transformer on the non-regular languages. The results are plotted in Figure 6 (b) (cf. the uploaded pdf), and we will include the outcomes of additional runs with different hyperparameters in our paper.

### Sampling the next token:

Our initial results use greedy decoding, but we conducted experiments to evaluate the sampling method for next-token prediction. As shown in Figure 6 (a), we conclude that while the Transformer is the best choice with greedy decoding (except for regular languages, where LSTM performs better), LSTM appears to excel when using sampling. These results also open up interesting future directions, e.g., investigating the influence of different temperature values in the softmax.

### Hyperparameters:

We tested multiple hyperparameters, including three learning rates and two optimization algorithms, and plotted the results in Figure 7 (a) (cf. the uploaded pdf).
Due to time constraints, we could not produce results for all languages, but we will include them in the paper as soon as possible; we plan to do so during the discussion period. Despite this, our claims remain valid: when considering the best settings for each architecture, LSTM consistently performs best on regular languages, while the Transformer excels on everything else. ### Size ablation: As suggested, we tested varying size settings (different numbers of layers and heads) for the Transformer architecture to determine whether increasing size can improve performance on regular languages. As shown in Figure 7 (b) (cf. the uploaded pdf), increasing the Transformer model size does not meaningfully improve performance on regular languages; the best values remain those originally used (`num_decoder_layers = 7, num_heads = 5`). For non-regular languages, the Transformer already outperformed the other architectures. ### **Expanding the discussion:** While reviewers appreciated our discussion on the impact of our work, they suggested expanding it in the Discussion section. We summarise it here. While other OOD generalisation types were examined in the literature, this is the first work studying rule extrapolation. This novel concept has the potential to impact LLM research both on conceptual and practical levels: - General compositional generalization notions examine whether from learning multiple concepts/rules separately, the model can understand the composition of the concepts/intersection of the rules. However, in rule extrapolation we measure the reverse direction. Importantly, this notion of compositionality is less straightforward than the generally considered direction. - Natural languages are compositional, thus, we expect this property in LLMs if they ought to model these languages well. Therefore, studying rule extrapolation can help us better understand LLMs’ inner workings. 
- Rule extrapolation allows for studying compositional generalisation ability easily on a variety of datasets, such as formal or programming languages. Therefore rule extrapolation has the potential to become an established benchmark task for evaluating current and future LM architectures. Pdf: /pdf/59ef32c792f73c219f7a05c9ae0a61f767d35f41.pdf
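The non-nested Dyck language introduced in this joint response admits a simple membership check: every bracket type must balance on its own, but different types need not nest, so ([)] is grammatical. A minimal sketch of this check (our own reading of the language description, not the authors' generator):

```python
def in_non_nested_dyck(seq: str, pairs=(("(", ")"), ("[", "]"))) -> bool:
    """Check each bracket type independently: openers and closers must
    balance, and no closer may appear before its matching opener."""
    for opener, closer in pairs:
        depth = 0
        for token in seq:
            if token == opener:
                depth += 1
            elif token == closer:
                depth -= 1
                if depth < 0:  # a closer appears before its opener
                    return False
        if depth != 0:  # an opener is never closed
            return False
    return True


print(in_non_nested_dyck("([)]"))  # → True (this string is invalid in the nested Dyck language)
print(in_non_nested_dyck(")("))    # → False
```

Running the standard nested-Dyck check instead would reject ([)], which is exactly what makes this language a distinct context-sensitive test case.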
NeurIPS_2024_submissions_huggingface
2024
Enhancing Protein Mutation Effect Prediction through a Retrieval-Augmented Framework
Accept (poster)
Summary: This paper presents a novel approach for mutation effect prediction using a retrieval-augmented framework. The main contributions include: * Structure Motif Embedding Database (SMEDB): a vector database storing ESM-IF local structure motif embeddings from experimentally determined protein structures in the Protein Data Bank. * Multiple Structure Motif Alignment (MSMA): an approach to efficiently retrieve and filter local coevolutionary motifs from SMEDB. * MSM-IPA: a new model architecture leveraging retrieved coevolutionary motif information to predict the effect of mutations. * The approach was validated on multiple benchmarks for mutation effect prediction and a case study on SARS-CoV-2 antibody optimization. Strengths: * **Originality**: The use of a retrieval mechanism with vector databases of local structural motifs is a novel strategy for mutation effect prediction, departing from previous methods that rely on MSAs or domain-level structure clustering. * **Quality**: The overall quality of the paper is high. The methodology is clear, leveraging the PDB to create a comprehensive database of local structural motifs and presenting a reasonable retrieval approach, filtering and model architecture. The evaluation is thorough, involving benchmarks on protein stability, binding affinity, and a case study on antibody engineering. * **Clarity**: The paper is well-structured, with a clear explanation of the motivation behind focusing on local structural motifs and providing a detailed discussion of the experiments. Despite missing details and ablations on retrieval hyper-parameter choices, the methods are reasonably well described. * **Significance**: The paper makes significant methodological contributions to mutation effect prediction, with potential for other researchers to build upon.
Weaknesses: * **Scalability**: The authors claim their approach is scalable but the paper lacks experimental evidence on the impact of increasing the number of encoded residues in the retrieval database. There's no analysis of increasing/decreasing the database size or including predicted structures from AlphaFoldDB. Predicted structures are used for encoding input proteins in the Novozymes benchmark but not for retrieval. If database size is a significant bottleneck for retrieval, this should be discussed. * **Choice of Embedding Method**: The reliance on ESM-IF for embedding structural motifs is not well justified, and alternative methods are not explored. Discussing how to overcome performance limitations imposed by ESM-IF would strengthen the paper. * **Exploration of Retrieval Hyper-parameters**: The paper lacks details on hyper-parameter choices for the retrieval mechanism. The retrieved motif length (N_retr = 16) and the number of retrieved motifs (L_filter = 16) could significantly impact performance but their effect is not studied. The discussions comparing the proposed approach against MSAs should also consider MSA depth vs the number of retrieved local motifs. * **Applicability to Other Mutation Effect Benchmarks**: I found it surprising that the methodology is not tested on ClinVar or ProteinGym, which are common mutation effect prediction benchmarks. It should be discussed if the lack of a training set for these datasets is a limitation. **Minor**: * MSM-Mut Description: The paper describes the classification head as a series of MLPs but does not specify the number of layers. Including this information would help reproducibility. * Training and Fine-tuning Details: Limited information on hyper-parameters and procedures for pre-training and fine-tuning. Including specifics like learning rate, batch size, and training time would help other researchers replicate the work.
* Extending the Figure 1 legend with more details on the methodology would help understanding. Technical Quality: 3 Clarity: 2 Questions for Authors: * Why were predicted structures not considered to extend the database? Could the authors extend their discussion on scalability and/or include analyses on the effect of increasing/decreasing the size of the underlying dataset used for retrieval? * What was the maximum sequence length used during the encoding of the database and during mutation effect prediction? How did you handle proteins longer than this maximum length? * Did you consider alternative underlying residue embedding methods such as ProteinMPNN? * In the Introduction, you claim that the distribution of local motif embeddings provides complementary information to MSA profiles. Could you clarify the basis for this claim? * Why was the method not benchmarked on ClinVar and ProteinGym datasets? Are there specific challenges or limitations that prevented these evaluations? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The paper discusses some limitations imposed by third-party modules such as ESM-IF and CUHNSW. However, some areas could benefit from further discussion or stating the limitations on: * Scalability: Discuss how scalable the method is and any potential challenges or limitations on retrieval when increasing the database size. * Benchmarking on Clinvar/ProteinGym: providing reasons for not benchmarking on those datasets and discussing any potential challenges or limitations would give a more comprehensive view of the method's applicability. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Title: Rebuttal Information Comment: **Question 1:MSM-Mut Description: The paper describes the classification head as a series of MLPs but does not specify the number of layers. Including this information would help reproducibility.** To predict the mutation effect, we first pass the information before and after the mutation, along with the corresponding retrieved motifs, through an MSM-IPA to obtain embeddings for the mutation site. Then, we compute the difference between the features before and after the mutation. To ensure the symmetry of the mutation (i.e., the ΔΔG of type 1 to type 2 should be the opposite of type 2 to type 1), we concatenate $feat_{raw} - feat_{mutate}$​ and $feat_{mutate} - feat_{raw}$ ​, and then pass them through a four-layer MLP to predict the mutation result. **Question 2: Training and Fine-tuning Details: Limited information on hyper-parameters and procedures for pre-training and fine-tuning. Including specifics like learning rate, batch size, and training time would help other researchers replicate the work.** During the pre-training on the PDB database, we used the PDB database up to December 31, 2021. In this process, we used the AdamW optimizer with a learning rate of 3e-4, beta1=0.9, beta2=0.999, and a batch size of 128. The pre-training process was conducted on 8 A100 GPUs over five days (300k steps). For the cDNA fine-tuning, we followed the same optimizer and parameters. The fine-tuning process took approximately two hours (4000 steps) on 8 A100 GPUs to achieve the best performance. Training for a longer period led to overfitting on the cDNA dataset, which negatively affected the model's generalizability. **Question 3: Extending the Figure 1 legend with more details on the methodology would help understanding.** Subgraph (a) illustrates the process of Multiple Structure Motif Alignment. First, we use ESM-IF to obtain embeddings for all amino acids in the PDB database. 
The size of this library is approximately 500 GB, making it impractical to use standard retrieval methods. To improve retrieval efficiency, we index this data using CUHNSW, constructing a Motif Embedding Database. For each query, we similarly use ESM-IF to obtain embeddings for the query structure, then quickly retrieve the approximate top-k nearest neighbors using HNSW based on the corresponding embeddings. After filtering and superposition, we obtain structure motifs suitable for downstream tasks. For efficiency, we have pre-searched the retrieved motifs from the entire PDB, which also facilitates future tasks. Subgraph (b) illustrates how MSM-IPA uses retrieved structure motifs to enhance the prediction of mutation effects. In the upper part of this figure, the retrieved structure motifs are aligned with the central amino acids of the query structure motif through preprocessing. These features are concatenated and input into MSM-IPA for cross-attention, thereby enhancing the model's ability to predict mutation effects. **Question 4: Why were predicted structures not considered to extend the database? Could the authors extend their discussion on scalability and/or include analyses on the effect of increasing/decreasing the size of the underlying dataset used for retrieval?** There are two considerations for not including predicted structures: quality and efficiency. In terms of quality, we assume that the local structure motifs in the RCSB PDB database are already sufficiently dense, and additional local motifs from predicted structures are also learned from the existing database. In terms of efficiency, the embeddings we currently use with ESM-IF have 512 dimensions, meaning the computational load for retrieval is quite large. Therefore, it is not feasible to increase the dataset from the existing 200k structures to 60M. However, this approach can be attempted in the future by modifying the embedding method and reducing the dimensionality. 
If the data balance can be managed well, theoretically, better results could be achieved. Here, we present results using a random selection of one-tenth, one-hundredth, and one-thousandth of the database. By randomly selecting 16 motifs from the top 100, 1000, and 10000 retrieved neighbours, we simulate reducing the database to approximately one-tenth, one-hundredth, and one-thousandth of its original size. We observe that within the existing high-quality structure database, the size of the database is quite important, as it determines the quantity of high-quality data.

Task: S669

| Method | Pearson | RMSE |
|---|---|---|
| MSM-Mut (w/o retrieval) | 0.45 | 1.73 |
| MSM-Mut (random in top 10000) | 0.43 | 2.03 |
| MSM-Mut (random in top 1000) | 0.52 | 1.57 |
| MSM-Mut (random in top 100) | 0.53 | 1.59 |
| MSM-Mut (top 16 neighbor) | 0.54 | 1.51 |

--- Rebuttal 2: Title: More Rebuttal Information Comment:

**Question 5: Did you consider alternative underlying residue embedding methods such as ProteinMPNN?**

ESM-IF can potentially be replaced by other methods based on C-alpha point cloud matching or by ProteinMPNN. In our future development, we will attempt to train an independent local structure motif encoder to reduce the embedding dimension centred on each amino acid from 512 dimensions to 32 or even lower. This reduction will enable us to incorporate a larger amount of data, including predicted structures.

**Question 6: In the Introduction, you claim that the distribution of local motif embeddings provides complementary information to MSA profiles. Could you clarify the basis for this claim?**

The conservation captured by sequence MSAs is relatively strong, but when dealing with surface mutation data and predicting the impact of mutations on binding affinity, the profile derived from sequences tends to be weaker. Our MSM-Profile does not have this issue because, whether intra-chain or inter-chain, the local structure motifs are similar and can share information.
Below are the performance results of our method and the MSA profile on the SKEMPI 2.0 dataset. It can be seen that our profile has a natural advantage in this task.

Task: SKEMPI 2.0

| Category | Method | Pearson (P.S.) | Spearman (P.S.) | Pearson | Spearman |
|----------------|----------------------|---------|----------|---------|----------|
| Profile | MSA-Profile | 0.0826 | 0.0822 | 0.0159 | 0.0666 |
| | MSM-Profile | 0.1551 | 0.1766 | 0.1433 | 0.1739 |
| | MSM-Profile (Filtered) | 0.1905 | 0.1886 | 0.1653 | 0.2063 |

Additionally, we found that on the S669 dataset, although the distributions of our MSM-Profile and MSA-Profile are similar, simply adding the two profiles results in a significant increase in Pearson correlation. This indicates that a portion of the information in each profile is independent. Therefore, we can enhance current tasks that use MSA by incorporating MSM-Profile, which may improve the model's performance.

Task: S669

| Method | Pearson |
|----------------------|---------|
| MSA-Profile | 0.17 |
| MSM-Profile | 0.19 |
| MSA-Profile + MSM-Profile | 0.23 |

**Question 7: Why was the method not benchmarked on ClinVar and ProteinGym datasets? Are there specific challenges or limitations that prevented these evaluations?** In this paper, we followed the setting of the Stability Oracle for benchmarking models predicting mutation effects on stability. Within the Stability Oracle test sets, we selected the latest and most challenging dataset, S669, as our benchmark dataset. During stability fine-tuning, a significant portion of our training set, the cDNA dataset, overlaps with the ProteinGym test set. Removing these overlapping data points may weaken our benchmark results on ProteinGym; hence, we did not conduct tests on this dataset. To demonstrate our model's capability, we show significant improvements on the S669 dataset, which has no overlap with the cDNA dataset. This indicates that our model performs well.
The data on ClinVar is more focused on binary classification, determining whether a mutation is pathogenic. Since both our pre-training and fine-tuning datasets do not match the ClinVar setting, it is challenging to compare our model's performance on the ClinVar dataset. In the future, we will attempt to create a new dataset based on our current model. However, our method does not consider sequence co-evolution information (MSA), so a more valuable comparison would be between MSA Profile and our MSM-Profile. We plan to conduct this experiment in future work. --- Rebuttal Comment 2.1: Comment: Thank you for your comprehensive rebuttal. The revisions have addressed various concerns and enhanced the manuscript quality with the additional content. I encourage the authors to add the content of this discussion to the main manuscript or Appendix. --- Reply to Comment 2.1.1: Comment: Thank you very much for your positive feedback and for recognizing the additional experiments and detailed analysis we provided in the rebuttal. We are grateful for your thoughtful suggestions and are pleased to hear that our revisions have addressed your concerns and improved the overall quality of the paper. We appreciate your recommendation to include the content of this discussion in the main manuscript or Appendix. In response, we will thoughtfully incorporate the key points and additional details from the rebuttal into the Appendix of the manuscript. This will ensure that the insights we discussed are thoroughly documented and accessible to readers, thereby further enhancing the contribution of our work. Once again, we sincerely thank you for your valuable suggestions and guidance throughout this process.
Summary: The paper presents a novel retrieval-augmented framework for enhancing the prediction of protein mutation effects, which is essential for analyzing protein functions and understanding genetic diseases. The authors design a system that incorporates similar structure information from known protein structures into the mutation effect prediction process. Central to this framework is the creation of a vector database, the Structure Motif Embedding Database (SMEDB), which stores embeddings of local structure motifs derived from a pre-trained protein structure encoder. This allows for efficient retrieval of similar local structure motifs during the prediction of mutation effects. Strengths: 1. The authors propose a structure embedding database to retrieve protein fragments with similar local structures. 2. This paper develops a novel architecture, MSM-IPA, to predict structural fitness, which shows superior performance in downstream tasks. 3. Extensive experimental results, including predictions of protein-protein interface mutation effects, case studies in antibody engineering, and applications on protein stability change datasets, validate the effectiveness of this method. Weaknesses: 1. In the field of bioinformatics, it is crucial not only to make accurate predictions but also to understand the reasons behind certain predictions. Enhancing models with interpretability features can help explain the importance of retrieved motifs for each prediction, which could be valuable. 2. A more thorough comparative analysis with existing methods, especially those that also focus on local structure motifs, could be beneficial for this paper. Expanding the discussion on how the proposed framework differs from and improves upon these methods will strengthen the novelty claims of the paper. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. 
Regarding the Retrieved Structure Motif Filter, could you explain why 16 motifs were chosen and whether this number is sensitive to performance variations? 2. Can the authors elaborate on how the performance and limitations of the ESM-IF and CUHNSW modules impact the overall framework? How were these modules selected, and were alternative options considered? 3. How does the model handle proteins with low sequence similarity or sparse structural motifs? 4. In Equation 2, why is the distance calculated with the 0th alpha carbon atom? Does this represent the alpha carbon atom of the central amino acid? 5. What impact does pre-training have on the predictive ability of the model? How would this method perform without pre-training? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 4 Limitations: The paper acknowledges certain limitations, but it could expand on these by discussing potential weaknesses in greater detail and suggesting more specific directions for future research. This might include exploring different protein structure encoders, expanding the database, or adapting the method for different types of mutations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Comment: **Question 1: In the field of bioinformatics, it is crucial not only to make accurate predictions but also to understand the reasons behind certain predictions. Enhancing models with interpretability features can help explain the importance of retrieved motifs for each prediction, which could be valuable.** Thank you for your valuable suggestion. In our paper, we primarily used ESM-IF embeddings to create a database and then searched for Multi-Structure-Motifs (MSM) to help predict mutation effects. From the perspective of interpretability, we have demonstrated that using only the top-1 neighbor is beneficial, indicating that retrieving with ESM-IF embeddings can indeed find information useful for predicting mutation effects. We also trained models with different numbers of neighbors and tested them on the S669 dataset. We found that the model's predictive ability improved as the number of neighbors increased, peaking around 16 neighbors before slightly degrading at very large neighbor counts. This suggests that the top-ranked neighbors contain diverse information that can help the model make better predictions. Based on this conclusion, we can infer that the retrieved structure motifs are biologically relevant to mutations and can serve as an effective data source.

Task: S669

| Method | Pearson | RMSE |
|----------------------|---------|----------|
| MSM-Mut (w/o retrieval) | 0.45 | 1.73 |
| MSM-Mut (1 neighbor) | 0.49 | 1.62 |
| MSM-Mut (2 neighbor) | 0.51 | 1.57 |
| MSM-Mut (4 neighbor) | 0.53 | 1.53 |
| MSM-Mut (8 neighbor) | 0.53 | 1.55 |
| MSM-Mut (16 neighbor) | 0.54 | 1.51 |
| MSM-Mut (32 neighbor) | 0.54 | 1.52 |
| MSM-Mut (1024 neighbor) | 0.51 | 1.63 |

**Question 2: A more thorough comparative analysis with existing methods, especially those that also focus on local structure motifs, could be beneficial for this paper. 
Expanding the discussion on how the proposed framework differs from and improves upon these methods will strengthen the novelty claims of the paper.** Among traditional methods, mTM-align[1] introduced the concept of Multiple Structure Alignment (MStrA). mTM-align is an extension of the pairwise structure alignment program TM-align and, like TM-align, excels at traditional template searches. However, it cannot handle local structure information. The most recent work on extracting local structure motifs is MicroMiner[2]. MicroMiner converts protein sequences into multiple k-mers and uses a method similar to MMseqs2 for initial filtering of k-mers from both the original and mutated sequences in the database. Finally, candidates are superimposed and structurally filtered. In comparison, our method has two main advantages. First, MicroMiner heavily relies on sequence similarity, which can be too strict; our method can identify local structure motifs with similar backbone structures but different sequences, greatly enriching the candidates. Second, MicroMiner can only search continuous sequence segments and requires separate searches for complexes; our method can directly search surface motifs in the database and use intra-chain motifs to enhance inter-chain mutation prediction. **Question 3: Regarding the Retrieved Structure Motif Filter, could you explain why 16 motifs were chosen and whether this number is sensitive to performance variations?** As shown in the table below, we chose 16 motifs because further increasing the number did not significantly improve performance and might introduce more noise, leading to worse predictions. 
Task: S669

| Method | Pearson | RMSE |
|----------------------|---------|----------|
| MSM-Mut (w/o retrieval) | 0.45 | 1.73 |
| MSM-Mut (1 neighbor) | 0.49 | 1.62 |
| MSM-Mut (2 neighbor) | 0.51 | 1.57 |
| MSM-Mut (4 neighbor) | 0.53 | 1.53 |
| MSM-Mut (8 neighbor) | 0.53 | 1.55 |
| MSM-Mut (16 neighbor) | 0.54 | 1.51 |
| MSM-Mut (32 neighbor) | 0.54 | 1.52 |
| MSM-Mut (1024 neighbor) | 0.51 | 1.63 |

Title: Rebuttal Information

---

Rebuttal 2: Title: More Rebuttal Information Comment: **Question 4: Can the authors elaborate on how the performance and limitations of the ESM-IF and CUHNSW modules impact the overall framework? How were these modules selected, and were alternative options considered?** **ESM-IF:** ESM-IF uses the GVP-Transformer to embed all backbone atoms, and this embedding is ultimately used for amino acid type prediction, which is why we chose this approach. We observed that structures retrieved using ESM-IF embeddings mostly resemble the backbone frames of amino acids that are close in the sequence. Please refer to Rebuttal Fig. 2. One issue with ESM-IF is that, although we observed that the neighbors retrieved by ESM-IF have high local backbone similarity, directly using the distance between embeddings for retrieval is a method not defined during training. Therefore, in the future, we plan to train an independent local structure motif encoder using contrastive learning to give this distance metric a clearer meaning. ESM-IF can potentially be replaced by other methods based on C-alpha point cloud matching or ProteinMPNN. In our future development, we will attempt to train an independent local structure motif encoder to reduce the embedding dimension centered on each amino acid from 512 dimensions to 32 or even lower. This reduction will enable us to incorporate a larger amount of data, including predicted data. 
**CUHNSW:** CUHNSW is an implementation of the classic HNSW (Hierarchical Navigable Small World) algorithm on GPUs, designed to perform approximate nearest neighbor search through a pre-built graph. This approach leverages GPU acceleration to enhance the performance of the search process. CUHNSW can be replaced by other scalable and parallelizable approximate nearest neighbor algorithms, such as LSH (Locality-Sensitive Hashing), KD-trees, or similar methods, depending on the specific task requirements (such as embedding dimensions), database size, and available resources. **Question 5: How does the model handle proteins with low sequence similarity or sparse structural motifs?** In the worst-case scenario, the model may converge to a state with no neighbors, relying entirely on information obtained from pretraining without any retrieval augmentation. We have observed that such data accounts for a relatively small proportion in real PDB databases. However, to enhance the model's capabilities, we will incorporate high-quality predicted structures (such as AFDB) into the database. To reduce the overall database size, we will perform retrieval for each newly added motif and only add those motifs that do not have very similar neighbors in the database. **Question 6: In Equation 2, why is the distance calculated with the 0th alpha carbon atom? Does this represent the alpha carbon atom of the central amino acid?** Yes, the 0th position refers to the central amino acid, and the distance is calculated to its alpha carbon atom. In Equation 2, our formula consists of two parts. The first part $1\left[ \min_{j \in N_{\text{retr}}}(\text{dist}(R_{\text{raw}, i}, R_{\text{motif}, j})) < 2.0\text{ Å}\right]$ attempts to find the closest match for each amino acid in the query motif within the retrieved motif. The second part $\exp(-\|p_{i, C\alpha} - p_{0, C\alpha}\|_2)$ assigns a weight to each amino acid, with the weight decreasing as the distance increases. 
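As a hedged illustration of the two factors in Equation 2, the sketch below computes, for each residue in the query motif, the match indicator against a retrieved motif and the distance-decay weight relative to the central residue. It simplifies `dist` to Cα–Cα distances and uses illustrative names; the paper's actual implementation may differ:

```python
import math

def euclid(a, b):
    """Euclidean distance between two 3-d coordinates."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def eq2_terms(query_ca, motif_ca, center_idx=0, cutoff=2.0):
    """Per-residue score: 1[min_j dist(q_i, m_j) < cutoff] * exp(-||p_i - p_center||).

    query_ca / motif_ca are lists of C-alpha coordinates (x, y, z);
    index `center_idx` (the "0th" residue) is the central amino acid.
    """
    center = query_ca[center_idx]
    scores = []
    for p in query_ca:
        matched = min(euclid(p, m) for m in motif_ca) < cutoff  # first factor
        weight = math.exp(-euclid(p, center))                   # second factor
        scores.append(weight if matched else 0.0)
    return scores

# Toy example: the central residue matches exactly, its neighbor matches
# within 2 Å, and a distant residue has no counterpart in the retrieved motif.
s = eq2_terms(
    query_ca=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (9.0, 0.0, 0.0)],
    motif_ca=[(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)],
)
# s[0] == 1.0, s[1] == exp(-1) ≈ 0.368, s[2] == 0.0
```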
**Question 7: What impact does pre-training have on the predictive ability of the model? How would this method perform without pre-training?** The table below presents an ablation study comparing the performance differences when retrieval is omitted, pre-training is omitted, or both are omitted. We observed that pre-training an IPA on PDB is crucial. Without pre-training, the model lacks a proper initial distribution for the 20 types of amino acids at masked positions, which negatively impacts subsequent tasks.

Task: S669

| Method | Pearson | RMSE |
|----------------------|---------|----------|
| MSM-Mut (w/o retrieval, w/o pretrain) | 0.37 | 2.82 |
| MSM-Mut (w/o pretrain, 16 neighbor) | 0.43 | 2.15 |
| MSM-Mut (w/o retrieval) | 0.45 | 1.73 |
| MSM-Mut (16 neighbor) | 0.54 | 1.51 |

**Reference**

[1] Dong R, Peng Z, Zhang Y, Yang J. mTM-align: an algorithm for fast and accurate multiple protein structure alignment. Bioinformatics, 2018, 34(10): 1719–1725.

[2] Sieg J, Rarey M. Searching similar local 3D micro-environments in protein structure databases with MicroMiner. Briefings in Bioinformatics, 2023, 24(6): bbad357.

---

Rebuttal 3: Comment: During this period, we have conducted additional experiments specifically addressing some of the weaknesses you mentioned. We hope these efforts help resolve the concerns raised. We look forward to hearing your feedback. **1. Exploring different protein structure encoders** To provide a better comparison, we tried a simpler encoding method: we defined the $\phi$ and $\psi$ angles corresponding to amino acid $i$ as $\phi_i$ and $\psi_i$. For an amino acid, we defined its embedding as [$\phi_{i-4}, \psi_{i-4}, ..., \phi_i, \psi_i, ..., \phi_{i+4}, \psi_{i+4}$], an 18-dimensional vector where each value ranges over $[-\pi, \pi]$. This vector consists of the angles corresponding to this amino acid and the four consecutive amino acids before and after it in the sequence. 
We defined the distance between two embeddings as their Manhattan distance. This method is referred to as the Continuous Backbone Angle Embedding (CBAE). This embedding method ensures that the retrieved structures have similar local sequence backbone structures. Using this retrieval method, we tested the performance on the SKEMPI and S669 datasets as follows:

Task: SKEMPI 2.0

| Method | Pearson (P.S.) | Spearman (P.S.) | Pearson | Spearman | RMSE | MAE |
|----------------------|---------|----------|---------|----------|--------|--------|
| MSM-Mut (w/o retrieval) | 0.4325 | 0.4031 | 0.6233 | 0.4954 | 1.6076 | 1.2155 |
| MSM-Mut (CBAE retrieval) | 0.4619 | 0.4262 | 0.6524 | 0.5158 | 1.5531 | 1.1622 |
| MSM-Mut | 0.4736 | 0.4354 | 0.6814 | 0.5786 | 1.4703 | 1.0212 |

Task: S669

| Method | Pearson | RMSE |
|----------------------|---------|----------|
| MSM-Mut (w/o retrieval) | 0.45 | 1.73 |
| MSM-Mut (CBAE retrieval) | 0.52 | 1.55 |
| MSM-Mut | 0.54 | 1.51 |

Our experiments showed that structures retrieved using ESM-IF performed better than this simpler method; ESM-IF can capture non-adjacent backbone atom positions to some extent. However, the differences between these methods are limited, because we observed that structures retrieved using ESM-IF embeddings also ensure a high degree of backbone similarity. **2. Expanding the database** Given the constraints of time, it is challenging to incorporate the entire set of predicted structures into our analysis. We are currently working on reducing the dimensionality of the embeddings through contrastive learning. However, the size of the entire AFDB dataset is substantial. To illustrate the impact of dataset size on model performance, we present results based on random selections of approximately one-tenth, one-hundredth, and one-thousandth of the database. 
Specifically, we randomly selected 16 motifs from the top 100, 1000, and 10,000 motifs, simulating a reduction of the database to approximately one-tenth, one-hundredth, and one-thousandth of its original size. Our observations indicate that within the existing high-quality structure database, the size of the dataset is indeed critical, as it significantly influences the availability of high-quality data.

Task: S669

| Method | Pearson | RMSE |
|----------------------|---------|----------|
| MSM-Mut (w/o retrieval) | 0.45 | 1.73 |
| MSM-Mut (random in top 10000) | 0.43 | 2.03 |
| MSM-Mut (random in top 1000) | 0.52 | 1.57 |
| MSM-Mut (random in top 100) | 0.53 | 1.59 |
| MSM-Mut (top 16 neighbor) | 0.54 | 1.51 |

**3. Adapting the Method for Different Types of Mutations** In the manuscript, we have already tested our method on several datasets, including SKEMPI 2.0 (general surface mutation ΔΔG prediction), SARS-CoV-2 (antibody binding affinity prediction), S669 (mutation effects on stability), and de novo enzyme mutation effects on stability. These datasets cover a broad range of mutation effect prediction scenarios. If there are additional datasets or settings where you would like to see our method evaluated, we would be more than happy to provide those results. We have addressed the key weaknesses you pointed out in our paper. If these responses adequately resolve your concerns, we kindly ask you to consider slightly adjusting your evaluation. We are also open to further discussing any remaining issues you might have.

---

Rebuttal Comment 3.1: Comment: Thanks for the detailed rebuttal. I have raised the score.
Summary:
- The paper presents a novel retrieval-augmented framework to efficiently retrieve similar local structure motifs in protein sequences for mutation effect prediction
- Current methods to understand coevolutionary patterns include MSA and domain-level structure clustering, which serve as a global representation of the protein and its evolutionary couplings
- The paper introduces the Structure Motif Embedding Database (SMEDB), constructed from the ESM-IF structure-based embedding approach, to enable rapid GPU-accelerated kNN search and retrieve local structure motifs similar to those affected by mutations
- The retrieval method Multiple Structure Motif Alignment (MSMA) leverages embeddings that capture local coevolutionary patterns
- The Multi-Structure Motif Invariant Point Attention (MSM-IPA) model aggregates retrieved coevolutionary motif information to predict changes in binding free energy (ΔΔG) on protein surfaces to assess protein-protein interactions

Strengths:
- The main strength of this paper is the novelty and originality of the idea. It is intuitive to look for information about the effects of mutations locally in the protein structure
- The search approach identifies similar local structure motifs from structurally unrelated proteins, which is interesting in its own right
- The paper demonstrates MSM-IPA's performance on the benchmark datasets compared to existing methods, which shows how the retrieval of information provides an advantage in predicting mutation effects on protein-protein interactions
- The paper shows the real-world utility of MSM-IPA by optimizing antibody engineering for SARS-CoV-2 and predicting enzyme thermostability for a novel enzyme

Weaknesses:
- The main weakness of the work is in the evaluations.
- It is difficult from Table 2 to conclude the competitive performance of MSM-Mut, as other DL-based methods are performing better.
- In Table 2, I did not understand the rationale for making some numbers bold. 
If the lowest number is better, then why are all numbers bold?
- It would make reading the paper much easier if tables were referred to in the text.
- Several important mutation prediction baselines, such as DeepSequence and EVE, are missing from Table 1. These and others are outlined and now standardized in the ProteinGym benchmark paper.
- Without a large-scale evaluation, it is hard to assess the utility of the method and fully evaluate the contributions.
- The interpretability of MSM-IPA's predictions might be obscured because it incorporates multiple structure motifs.
- Figure 3 could benefit from some labels for the reader to identify T chain 576, 5KOV, H chain 103, and 7FAE.

Technical Quality: 4 Clarity: 2 Questions for Authors: 1. Given that ESM-IF is trained on "global" protein structure and not local, why is it intuitive to use ESM-IF as a tool to embed local structures? 2. Would a stand-alone embedding module trained entirely on local structures perform better as an alternative to ESM-IF within the MSM-IPA framework? Confidence: 5 Soundness: 4 Presentation: 2 Contribution: 2 Limitations: As discussed earlier in the weaknesses section, the main limitation of the paper is the evaluation, which currently does not fully support the main claim. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Question 1: It is difficult from Table 2 to conclude the competitive performance of MSM-Mut as other DL-based methods are performing better?** We apologize for any difficulty in concluding the competitive performance of MSM-Mut from Table 2. Our primary intention with this table was to demonstrate two key points. Firstly, despite employing a relatively simple model architecture, incorporating the retrieved structure motifs led to a notable improvement in the model's accuracy for predicting mutations on SARS-CoV-2. Our method achieves performance comparable to state-of-the-art models and reaches top 20% levels for four of the five beneficial mutations. Secondly, we observed that most DL-based models, including our non-retrieval version, tend to downplay the significance of the RH103M mutation in enhancing binding. However, with the incorporation of retrieved information, the predicted importance of this mutation increased significantly, placing it in the top 20%. This observation supports our subsequent case study on the RH103M mutation, highlighting the crucial role of retrieved structure motifs in accurately predicting binding ΔΔG. **Question 2: In Table 2, I did not understand the rationale for making some numbers bold. If the lowest number is better, then why are all numbers bold?** We apologize for the excessive bolding in the table. Here, we chose to bold all mutations ranked in the top 100. Note that there are a total of 494 mutations, so we roughly selected the top 20% of mutations to bold. This is because, in practical protein engineering processes, the workflow typically involves using deep learning models to screen for potentially top-ranking mutations, which are then sent to the laboratory for binding affinity measurements. Therefore, in our setting, we consider the top 20% of mutations as the set that can be forwarded to downstream wet lab testing, and any favorable mutation ranking within this set can be successfully detected. 
**Question 3: It would make reading the paper much easier if Tables were referred to in the paper.** Thank you for your valuable feedback. We apologize for not providing enough references to the tables in the paper. We agree that referring to the tables within the text will enhance the readability of the paper. We will ensure that all tables are appropriately referenced in the revised manuscript. **Question 4: Several important mutation prediction baselines, such as DeepSequence and EVE, are missing from Table 1. These and others are outlined and now standardized in the ProteinGym benchmark paper.** Our tables in the paper already include the performance of most model types on the SKEMPI 2.0 dataset. Since SKEMPI 2.0 is a dataset for predicting the ΔΔG of surface mutations, alignment-based sequence methods like DeepSequence and EVE tend to perform relatively poorly. To provide a more comprehensive comparison, we have added the performance of EVE, ESM2, and supervised ESM2 in the table below. We will update this table in the revised manuscript.

Task: SKEMPI 2.0

| Category | Method | Pearson (P.S.) | Spearman (P.S.) | Pearson | Spearman | RMSE | MAE |
|----------------|----------------------|---------|----------|---------|----------|--------|--------|
| Unsupervised | ESM2 | 0.0100 | 0.0100 | 0.1700 | 0.1630 | 2.6580 | 2.0210 |
| | EVE | 0.1131 | 0.0898 | 0.1237 | 0.1088 | 2.2622 | 1.4178 |
| Supervised | ESM2 (Sup) | 0.3330 | 0.3040 | 0.6030 | 0.5290 | 2.1500 | 1.6700 |

**Question 5: Without a large-scale evaluation, it would be hard to assess the utility of the method and fully evaluate the contributions** In the field of ΔΔG prediction for surface mutations, the most commonly used dataset is SKEMPI 2.0, which includes various types of protein-protein interface mutation data such as antibody-antigen (AB/AG), protease-inhibitor (Pr/PI), and T-cell receptor-major histocompatibility complex (TCR/pMHC). 
For the stability prediction, we followed the approach from the Stability Oracle[ref], which represents the current state-of-the-art for structure-based methods. This work used the cDNA dataset[ref] for pretraining and tested on the S669 dataset. We chose S669 as our stability benchmark because it was the most challenging dataset in the Stability Oracle study, with a Pearson correlation of only 0.52. Therefore, we aimed to optimize our method on this difficult dataset. We did not evaluate our method on the large-scale ProteinGym dataset because our method requires a pretraining dataset to enable MSM-IPA to integrate neighboring information. The cDNA pretraining dataset overlaps significantly with the ProteinGym test set. Removing the overlapping training data would substantially reduce the size of ProteinGym. Additionally, the data in ProteinGym are relatively simple, so we did not choose it for testing. **Question 6: The interpretability of MSM-IPA's predictions might be obscured because it incorporates multiple structure motifs** We have provided the performance results using only one neighbor, which also shows significant improvement. This indicates that the most similar local structure motif we selected can serve as a potential post-mutation structure, contributing to the improved results. We aim to demonstrate through MSM-IPA experiments that our retrieval database contains such valuable information, which can be utilized beyond just MSM-IPA.

Task: S669

| Method | Pearson | RMSE |
|----------------------|---------|----------|
| MSM-Mut (w/o retrieval) | 0.45 | 1.73 |
| MSM-Mut (1 neighbor) | 0.49 | 1.62 |
| MSM-Mut (16 neighbor) | 0.54 | 1.51 |

---

Rebuttal 2: Title: More Rebuttal Information Comment: **Question 7: Figure 3 could benefit from some labels for the reader to identify T chain 576, 5KOV, H chain 103, and 7FAE** In Figure 3, blue represents 5KOV and green represents 7FAE. 
The red segment is 5KOV's T chain 574-578, and the yellow segment is 7FAE's H chain 101-105. To help understand this, we extracted the local structure motifs (Rebuttal Fig. 1). Despite overall differences, their local backbone atoms and surrounding non-charged polar amino acids like serine and asparagine are very similar. **Question 8: Given that ESM-IF is trained on "global" protein structure and not local, why is it intuitive to use ESM-IF as a tool to embed local structures?** We chose ESM-IF for embedding because, although the embedding is calculated based on global structure, it is used for protein inverse folding, which is sensitive to local structure. Preliminary case studies showed that ESM-IF embeddings help retrieve structures with similar nearby backbone frames. See Rebuttal Fig. 2 in the supplementary PDF. **Question 9: Would a stand-alone embedding module trained entirely on local structures perform better as an alternative to ESM-IF within the MSM-IPA framework?** Firstly, we aim to demonstrate that our method outperforms simple local backbone encoding. To compare, we tried a simpler method: for an amino acid, we defined its embedding as [$\phi_{i-4}, \psi_{i-4}, ..., \phi_{i+4}, \psi_{i+4}$], an 18-dimensional vector of angles for this amino acid and the four consecutive amino acids before and after it in the sequence. We call this embedding Continuous Backbone Angle Embedding (CBAE). This method ensures similar sequential-local backbone structures. We tested performance on the SKEMPI and S669 datasets:

Task: SKEMPI 2.0

| Method | Pearson (P.S.) | Spearman (P.S.) | Pearson | Spearman | RMSE | MAE |
|----|----|----|-----|-----|-----|---|
| MSM-Mut (w/o retrieval) | 0.4325 | 0.4031 | 0.6233 | 0.4954 | 1.6076 | 1.2155 |
| MSM-Mut (CBAE retrieval) | 0.4619 | 0.4262 | 0.6524 | 0.5158 | 1.5531 | 1.1622 |
| MSM-Mut | 0.4736 | 0.4354 | 0.6814 | 0.5786 | 1.4703 | 1.0212 |

Task: S669

| Method | Pearson | RMSE |
|--|---|--|
| MSM-Mut (w/o retrieval) | 0.45 | 1.73 |
| MSM-Mut (CBAE retrieval) | 0.52 | 1.55 |
| MSM-Mut | 0.54 | 1.51 |

Our experiments showed that ESM-IF performed better than this simpler method, as it captures non-adjacent backbone atom positions. Training a good pre-trained local structure encoder is still an open question. Different tasks may benefit from task-specific embeddings. For our mutation effect prediction, we used embeddings from the commonly used ESM-IF model.

---

Rebuttal Comment 2.1: Title: Thanks for the notes Comment: I thank the authors for the notes. My main criticism regarding choosing a global embedding (ESM-IF) to obtain local information is the difficulty of interpretability and the lack of comparison with standard baselines. At this point, I cannot be more positive than a score of 5. I hope the comments help improve the paper.

---

Reply to Comment 2.1.1: Comment: We sincerely appreciate the reviewer's valuable suggestions. Regarding your concerns about interpretability and the lack of baseline comparisons, we would like to provide the following clarifications: **Interpretability**: First, we would like to explain why we chose ESM-IF, a global embedding, to model local information. Our motivation stems from the fact that the Protein Data Bank (PDB) is sufficiently large, leading us to believe that local structures of proteins are relatively dense. Thus, we aimed to directly construct a local-structure-based profile that could model the effects of mutations, particularly those on the protein surface, something that conventional MSA profiles cannot achieve. 
ESM-IF is trained on an inverse folding task, where the internal embeddings for each amino acid align well with our task. Therefore, we hypothesized that the similarity between these embeddings could, to some extent, quantify the similarity between local structures. Additionally, we have provided evidence of the validity of the information retrieved by our method. In Rebuttal Fig. 2, we display the retrieved structures, showing that the local motifs with the most similar embeddings indeed share highly similar backbone structures. **Baseline Comparisons**: We have already added several baseline experiments on the SKEMPI and S669 datasets, as mentioned earlier in the rebuttal. We also explained the reasons for not selecting the two large-scale datasets you mentioned. Currently, we are conducting tests of our method and baselines on a sub-dataset of ProteinGym that has no overlap with our fine-tuning training set. We will include these results in the appendix of the manuscript. We hope that these clarifications address your concerns. If you have any further questions, we would be more than happy to continue the discussion with you.
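As an aside for readers, the CBAE baseline described in Question 9 is straightforward to reproduce. The sketch below builds the 18-dimensional angle vector and a wrap-around distance for nearest-neighbor retrieval; the function names, degree convention, and distance choice are our illustrative assumptions, not the authors' code:

```python
import math

def cbae_embedding(phi, psi, i, window=4):
    """Continuous Backbone Angle Embedding (CBAE) for residue i:
    the (phi, psi) torsion angles of residue i and its `window`
    sequential neighbors on each side, flattened into one vector.
    With window=4 this gives 9 residues x 2 angles = 18 dimensions.
    Assumes window <= i < len(phi) - window."""
    emb = []
    for j in range(i - window, i + window + 1):
        emb.extend([phi[j], psi[j]])
    return emb

def angle_distance(a, b):
    """Distance between two angle vectors (in degrees) that respects
    360-degree wrap-around, so 179 and -179 are treated as close."""
    return math.sqrt(sum(
        min(abs(x - y), 360 - abs(x - y)) ** 2 for x, y in zip(a, b)
    ))
```

Retrieval then amounts to ranking candidate motifs by `angle_distance` to the query embedding; the rebuttal's point is that ESM-IF embeddings outperform this purely sequential-local encoding.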
Rebuttal 1: Rebuttal: Thank you for your insightful and constructive comments as well as your appreciation of our work. Below are some clarifications and answers to your questions. If our response does not fully address your concerns, please post additional questions and we will be happy to have further discussions. As we are not able to upload a revised draft during the rebuttal stage, we promise to update our draft based on your suggestions if accepted. Pdf: /pdf/00421429631fdd37a668663b074c1ad095437f52.pdf
NeurIPS_2024_submissions_huggingface
2024
Can LLMs Implicitly Learn Numeric Parameter Constraints in Data Science APIs?
Accept (poster)
Summary: In data science library APIs, there is a numerical parameter constraint between input data and parameters. This paper presents an empirical study on whether large language models (LLMs) can implicitly learn numerical constraints in data science library APIs. The findings indicate that LLMs demonstrate the ability to memorize common patterns in the use of popular data science APIs, and their performance significantly declines when the rank or dimensions of the input data increase, suggesting a general lack of true understanding of the API's numerical constraints. Additionally, the paper reports several other findings. It also introduces a dataset named DSEVAL, designed to evaluate LLMs' capabilities in understanding numerical API constraints for some popular data science libraries. Strengths: 1. The paper explores a new and interesting problem regarding whether LLMs comprehend the numerical parameter constraints between input data and parameters in Data Science library APIs during the code generation process. 2. The study is thoroughly conducted, encompassing three scenarios (Full program, all parameters, Individual parameter) and various series, different sizes of LLMs such as GPT-4-Turbo, DeepSeek, CodeLlama, StarCoder, and CodeQwen 1.5. 3. It contributes a new dataset designed to assess LLMs’ capabilities in understanding the numerical API constraints for popular Data Science libraries like PyTorch and NumPy, with both the dataset and source code made publicly available. Weaknesses: 1. While the problem of numerical parameter constraints in the data science APIs discussed in this paper exists, it may not be particularly significant and appears to be easily resolvable. This raises concerns about the potential for further research. 2. Some of the experimental details of the paper are unclear and require additional elaboration. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. 
I believe that by incorporating API documentation into fine-tuning or including it within prompts, LLMs might better understand numerical parameter constraints, which could potentially address the issue you're exploring. 2. API parameters can vary across different versions of a library. Could these variations impact the results of your study? 3. Why do you choose to conduct the study with 28 APIs, whereas only 12 APIs are used in the DSEVAL dataset? What accounts for this difference? 4. The specific LLM used during input creation is not mentioned. Could you clarify which LLM is employed? 5. In Figure 5, why do different sets of APIs get affected as the input rank and dimensionality increase? Furthermore, how do rank and dimension affect the complexity of these APIs? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have not dedicated a separate section to discussing the limitations; however, they consider the limitations to be associated with the scope of their study. I consider this sufficient. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Question-1: I believe that by incorporating API documentation into fine-tuning or including it within prompts, LLMs might better understand numerical parameter constraints, which could potentially address the issue you're exploring.** Thanks for this suggestion. We conducted additional experiments using the documentation-augmented setting across the 3 difficult APIs/constraints used in our CoT experiments. We provide the raw documentation of each API (obtained from the source code docstring) in the prompt and apply both base and instruction-following LLMs. In Figure 14 (see attached PDF in global response), we compare the performance with and without documentation. We found that there are cases where documentation can improve performance. For example, in the most difficult setting of `torch.nn.Conv2d`, adding documentation improves the performance of CodeLlama-34B-Instruct from 20% to 45% accuracy (Figure 14 (d)). However, there are also cases where adding documentation decreases performance. For example, GPT-4-Turbo-Instruct's performance falls from 57.5% to 22.5% in the most difficult setting of `torch.nn.MaxPool2d` (Figure 14 (b)). Since we provide the raw documentation text without further processing, the success rate of adding documentation can vary depending on the specific model as well as the quality of the documentation. As such, this demonstrates that naively adding API documentation does not always achieve better performance on our tasks. Again, we want to emphasize that we investigate the LLMs' ability to implicitly reason about such constraints, given that LLMs have been trained on massive parameter combinations. Our findings in the paper show that current LLMs cannot implicitly reason about more complex API constraints or unusual inputs. This can inspire many follow-up works on improving LLM performance, including the reviewer's suggestion of incorporating API documentation or performing further fine-tuning.
**Question-2: API parameters can vary across different versions of a library. Could these variations impact the results of your study?** Great question! Indeed, some APIs are not stable, and their parameters and constraints could vary across versions. However, in this study we focus on the core functional APIs, which should remain stable across library versions. We manually examined all 28 APIs across a two-year period of major releases and found that only one API (`torch.squeeze`) received a small update to its parameter list (removing the optional `out` parameter used to store the output tensor), while its numerical parameter constraints stayed the same. Therefore, we believe the API parameter variations across library versions have minimal impact on our study. **Question-3: Why do you choose to conduct the study with 28 APIs, whereas only 12 APIs are used in the DSEVAL dataset? What accounts for this difference?** Our 28 APIs are obtained by first selecting the 22 core APIs that have numeric parameter constraints in NNSmith, and then adding 6 additional core APIs by examining the API list. Our chosen APIs are the core APIs commonly used in practice. While there are a large number of APIs in DS libraries, the commonly used ones (e.g., Conv2d) are not that many. For example, the widely used benchmark DS-1000 [29] contains 1000 DS problems collected from StackOverflow (including 68 PyTorch problems and 220 NumPy problems), reflecting realistic use cases; meanwhile, it only covers 16 PyTorch APIs and 59 NumPy APIs (after excluding data construction APIs like `np.ones`). Furthermore, some APIs have similar constraints or the same constraint types. For example, `numpy.max` and `numpy.min` have the same constraints, whereas `torch.nn.Conv2d` and `torch.nn.Conv1d` have very similar constraints. Therefore, in our DSeval benchmark, we select 12 representative APIs to keep the experiments at an affordable scale, while still covering all major constraint categories in Table 1.
We will clarify this further in the next revision of the paper. **Question-4: The specific LLM used during input creation is not mentioned. Could you clarify which LLM is employed?** Sorry for the confusion. We do not use LLMs during input creation. Instead, we have a template-based prompt, and we generate random input values using built-in random number generators. We then use the Z3 constraint solver to filter out the invalid inputs and ensure that every input has at least one valid answer. For more details, please refer to Section 2.3. **Question-5: In Figure 5, why do different sets of APIs get affected as the input rank and dimensionality increase? Furthermore, how do rank and dimension affect the complexity of these APIs?** Input rank or dimensionality can affect different APIs depending on the type of numeric constraint of each API. Table 1 shows the categorization of the different types of constraints. For example, an API like `torch.nn.Softmax` that has a constraint of `-rank <= dim < rank` will have its difficulty influenced by the actual rank of the input tensor. On the other hand, an API like `torch.nn.Conv2d` has a constraint of `in_channels % groups == 0`, which depends on the actual dimension value of the input (i.e., `in_channels`). As the dimension value of `in_channels` increases, it becomes more difficult to select a `groups` parameter that divides it evenly. Therefore, we increase the difficulty of different APIs based on whether the constraint depends on the rank, the dimension, or both. Thanks for the question, and we will clarify this further in the next revision of the paper. --- Rebuttal 2: Comment: A quick reminder: Did the authors' response adequately address your questions and concerns? Can you please write a brief response to let them know that you read it?
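The input-creation pipeline described in Question 4 (template prompts, random values, validity filtering) can be sketched as follows. We use a Conv2d-style output-size constraint as the example; the brute-force validity check is a simplified stand-in for the Z3 solver mentioned in the rebuttal, and all names and value ranges are illustrative:

```python
import random

def conv2d_output_size(h, kernel, stride=1, padding=0):
    """Spatial output size of a Conv2d-style layer along one axis
    (the standard floor formula; dilation omitted for brevity)."""
    return (h + 2 * padding - kernel) // stride + 1

def is_valid(h, kernel, stride, padding):
    """The numeric constraint: the output size must be at least 1,
    i.e. the kernel must fit inside the padded input."""
    return conv2d_output_size(h, kernel, stride, padding) >= 1

def sample_valid_cases(n, seed=0):
    """Template-style input creation: draw random values, then keep
    only constraint-satisfying cases (the paper filters with Z3; a
    brute-force check is used here as a simplified stand-in)."""
    rng = random.Random(seed)
    cases = []
    while len(cases) < n:
        h = rng.randint(1, 64)
        kernel = rng.randint(1, 11)
        stride = rng.randint(1, 4)
        padding = rng.randint(0, 3)
        if is_valid(h, kernel, stride, padding):
            cases.append((h, kernel, stride, padding))
    return cases
```

For instance, `conv2d_output_size(28, 3, 1, 1)` gives 28, the familiar same-size convolution, while a 7-pixel kernel on a 3-pixel input with padding 1 is rejected by `is_valid`.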
Summary: This paper investigates the question posed in its title: can LLMs implicitly learn numeric parameter constraints in data science APIs? To investigate this problem, this paper constructs a benchmark, DSEVAL, which contains a series of API-call-based code completion tasks. Then, this paper evaluates several LLMs to derive the conclusion: LLMs can implicitly learn the constraint pattern during training, but lack genuine comprehension of the underlying constraints. Strengths: - The investigated problem is interesting. - The technique is sound. - The writing is good. Weaknesses: - I am not sure whether this paper should be submitted to the Dataset & Benchmark track, since this paper focuses on evaluating LLMs’ underlying capabilities for satisfying parameter constraints when calling data science APIs, rather than proposing any novel algorithmic advances, analysis, or applications. - This paper should provide some naïve solutions as the starting point for this problem. However, this paper just provides a comprehensive evaluation on existing LLMs, without proposing any possible solution for this problem. - I think one natural question is whether LLMs can solve this problem by using the ReAct [1] prompt strategy, i.e., each time the LLM calls a data science API, it is prompted to first output a thought about the constraint for this API, and then generate the parameter. I am curious about the results in this setting. [1] ReAct: Synergizing reasoning and acting in language models, ICLR 2023. Technical Quality: 2 Clarity: 3 Questions for Authors: See weakness. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 1 Limitations: I do not find any discussion of limitations in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
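The ReAct-style procedure the reviewer suggests (emit a thought about the constraint first, then the parameter value) can be sketched as a simple prompt builder; the wording and function name below are illustrative assumptions, not the prompt actually used in the paper or in ReAct:

```python
def build_react_prompt(api_name, input_code, parameter):
    """Build a ReAct-style prompt: ask the model to first write a
    Thought recalling the API's numeric constraint, then an Action
    emitting a concrete value for one parameter. Wording is
    illustrative, not the paper's actual prompt."""
    return (
        f"You are completing a call to `{api_name}`.\n"
        f"Input code:\n{input_code}\n"
        f"Thought: state the numeric constraint that `{parameter}` "
        f"must satisfy given the input above.\n"
        f"Action: output a single valid value for `{parameter}`.\n"
    )

# Example: eliciting a valid `groups` value for a Conv2d call.
prompt = build_react_prompt(
    "torch.nn.Conv2d",
    "x = torch.randn(1, 12, 32, 32)\n"
    "layer = torch.nn.Conv2d(12, 24, 3, groups=?)",
    "groups",
)
```

The authors' rebuttal below reports results for exactly this kind of setup (with a single demonstration added), so the sketch is only meant to make the prompting pattern concrete.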
Rebuttal 1: Rebuttal: **Question-1: I am not sure whether this paper should be submitted to Dataset & Benchmark track, since this paper focuses on evaluating LLMs’ underlying capabilities for satisfying parameter constraints when calling data science APIs, rather than proposing any novel algorithmic advances, analysis, or applications.** We thank the reviewer for this suggestion. We would like to point out that although there is a separate Dataset & Benchmark track, the main conference still accepts a primary area for evaluation: “Evaluation (methodology, meta studies, replicability and validity)”. Given the wide usage of DS libraries/APIs, automatically synthesizing valid DS programs with LLMs has been a critical research area in order to improve DS development efficiency [29] or improve the reliability of ML systems [16]. Therefore, it is critical to rigorously evaluate whether LLMs can implicitly learn the numeric constraints in data science APIs -- an untested assumption of LLMs’ capabilities. Our paper aims to propose a novel fine-grained analysis for DS API generation by isolating each API call and API parameter generation, as opposed to the previous coarse-grained benchmarks such as DS-1000. We believe our proposed evaluation methodology is novel and would be highly beneficial for the research community to accurately evaluate and analyze the LLMs’ ability in the important domain of DS code generation. Also, we do not aim to propose novel code generation algorithms in this work, but rather to point out interesting results and inspire future work. Some prior work [a,b,c,d] with similar contributions (i.e., discovering important findings/limitations for LLMs) were also published at NeurIPS main track. **Question-2: This paper should provide some naïve solutions as the starting point for this problem. 
However, this paper just provides a comprehensive evaluation on existing LLMs, without proposing any possible solution for this problem.** We understand that there are inference-time strategies such as ReAct that can boost the performance of LLMs. However, the goal of this paper is to investigate the ability of LLMs to ***implicitly*** model the parameter constraints using zero-shot inference, without adding ***explicit*** reasoning steps such as recalling the constraints with ReAct. Different models may perform differently depending on the prompting techniques used, and the performance also highly depends on the wording and in-context demonstrations of the prompt. On the other hand, evaluating them under a naïve zero-shot auto-completion setting provides a fair assessment of the ability of the LLM to implicitly learn the numeric constraints. Meanwhile, please kindly note that we have applied the Chain-of-Thought (CoT) prompting strategy to elicit thought steps at **the end of Section 4.3**. More specifically, we evaluate instruction-tuned LLMs using CoT prompting ("Please think step by step") on multiple particularly difficult DS APIs. We found that although CoT prompting does help improve performance (especially when using state-of-the-art GPT-4-Turbo), LLMs still struggle with more complex arithmetic constraints such as the ones in `torch.nn.Fold` (less than 5% accuracy in more difficult settings). **Question-3: I think one natural question is whether LLMs can solve this problem by using the ReAct [1] prompt strategy.** Thanks for the great suggestion! Although techniques that elicit explicit reasoning at inference time (like ReAct) are not the main focus of this paper, we agree that such an analysis is valuable and can provide additional insights toward the goal of this paper. We applied the ReAct prompt strategy on the difficult APIs/constraints studied using CoT in the paper.
Our ReAct prompt setup follows the reviewer's suggestion (i.e., asking the LLM to generate a thought first and then the code output). We also provide the LLM with a single demonstration of the ReAct task. Figure 12 (see attached PDF in global response) compares the results of the instruction-following LLMs using CoT versus ReAct as well as their base variants:

- We see that for `torch.nn.MaxPool2d`, ReAct prompting generally performs better than CoT, especially in more difficult problem settings (e.g., at the highest difficulty setting, GPT-4-ReAct: 89.5% vs. GPT-4-CoT: 56.0%). This demonstrates the effectiveness of ReAct in generating thoughts that can help with correct API parameter generation.
- However, for `torch.nn.Conv2d`, ReAct performs similarly to CoT prompting. The reason is that the constraint used in Conv2d is much more complex, requiring factorization. As such, smaller open-source LLMs cannot perform well even with reasoning steps. On the other hand, state-of-the-art LLMs like GPT-4-Turbo show their powerful reasoning abilities by improving the performance over the base variant with both CoT and ReAct.
- Although ReAct performs better than CoT for the easier difficulty settings in `torch.nn.Fold`, its performance quickly drops in higher difficulty settings (at best ~5% accuracy, even with GPT-4-Turbo).

Overall, these experimental results demonstrate that even more advanced prompting methods such as ReAct still cannot effectively handle more complex constraints. We thank the reviewer again for this suggestion and will work towards adding these results to the new revision of the paper. **References** [a] Testing the General Deductive Reasoning Capacity of Large Language Models Using OOD Examples. https://openreview.net/forum?id=MCVfX7HgPO NeurIPS 2023 (poster). [b] Are emergent abilities of large language models a mirage? https://openreview.net/forum?id=ITw9edRDlD NeurIPS 2023 (oral). [c] Statistical knowledge assessment for large language models.
https://openreview.net/forum?id=pNtG6NAmx0 NeurIPS 2023 (poster). [d] Exploring length generalization in large language models. https://openreview.net/forum?id=zSkYVeX7bC4 NeurIPS 2022 (poster). --- Rebuttal Comment 1.1: Comment: I thank the authors for providing detailed feedback on my concerns. I have read the provided references, but I think this paper fails to provide sufficient insights in its evaluation, especially compared with [1, a-d]. This paper investigates an interesting yet less significant problem, i.e., whether LLMs implicitly learn the numeric constraints in data science APIs. From my perspective, the answer is predictable: LLMs can implicitly learn simple constraints but fail for complex constraints during pre-training. This paper just verifies this point by (1) constructing a benchmark; and (2) testing the LLMs. However, I expect more interesting and significant insights such as **(a) the potential reasons for this phenomenon**; **(b) some possible initial solutions for this issue during pretraining / finetuning / inference stage**. I think exploring these directions may contribute much more to the community. Considering the points above, I do not think this paper provides enough contribution for a top conference like NeurIPS, and I decide to keep my score unchanged. [1] Case-Based or Rule-Based: How Do Transformers Do the Math? ICML 2024. --- Rebuttal 2: Comment: Thanks for reviewing our response and providing further feedback. We would like to politely point out that in our initial rebuttal response, we addressed all of the reviewer's initial questions and concerns. This included adding the requested experimental results demonstrating that even advanced prompting techniques like ReAct cannot handle complex constraints. Please let us know which specific responses in the initial rebuttal the reviewer still has issues with. Additionally, we also respectfully disagree with the reviewer's new comments.
Please see our detailed responses below: >This paper investigates an interesting yet less significant problem **We believe that evaluating the LLMs’ ability to generate correct and valid DS programs is extremely important**. Data science applications, and by extension DS libraries/APIs, are used daily by developers to process and analyze large amounts of data to build ML systems and make decisions. As such, DS APIs have penetrated almost every corner of modern society in the era of deep learning, including autonomous driving software [9, 27, 45], financial systems [18, 4], and coding assistants [44, 36]. LLMs are being widely used to aid in and generate programs, especially in the important domain of DS code. A recent study [e] has shown that GPT-4 can achieve performance on par with humans on various data analysis tasks. Additional work [f] has demonstrated improved performance when using LLMs to assist data analysts. Furthermore, there has been active research like [g] and the development of automatic tools like Data-Copilot [h] that use LLMs to automatically solve data science tasks. Due to this wide adoption, it is critical to check whether LLMs can implicitly learn the numeric constraints in data science APIs. While fine-grained analysis has been done in the math reasoning domain (e.g., Hu, Yi, et al.’s ICML 2024), none of the prior work has focused on the DS code generation domain. In this work, we advocate for a thorough examination of the assumption -- LLMs can implicitly learn the correct DS API parameter constraints, relied on but not tested by many prior works [46, 21, 16]. Besides practical importance, we would also emphasize the notable distinctions in our approach and findings compared to previous work. **Our problems with DS API parameters are fundamentally different from the synthetic datasets or toy problems studied in prior work like Hu, Yi, et al.’s ICML 2024 work or [a,b,c,d], which we believe provides unique insights and values**. 
Firstly, our settings do not involve unnatural synthetic problems like the linear regression function learning in [Hu, Yi, et al.] that are rarely seen during pre-training. Instead, there are more than 2,400,000/480,000 open-source projects using NumPy/PyTorch [i, j], meaning there exists a massive number of DS API parameter examples in the pre-training dataset, in a format identical or very similar to our benchmark. Secondly, prior benchmarks typically present problems explicitly (e.g., `1+2=?`), while in our problem the constraints are implicit. This provides a unique testbed and poses an important theoretical question: Given a large set of examples $(\mathbf{X}_1,\mathbf{X}_2,\mathbf{X}_3,\ldots,\mathbf{X}_N)$ where each assignment $\mathbf{X}_i=(x_i^1,x_i^2,\ldots,x_i^m)$ satisfies a certain constraint $\phi$ in first-order logic, can a model ***pre-trained*** on a large mixture of data including these examples (i.e., $\{\mathbf{X}_i\}_{i=1}^N$) implicitly learn $\phi$ and generalize to predict $x_i^j$ when $x_i^{-j}$ is out of distribution? **References** [e] Is GPT-4 a Good Data Analyst? https://arxiv.org/abs/2305.15038 (2023). [f] How Do Data Analysts Respond to AI Assistance? A Wizard-of-Oz Study. https://arxiv.org/abs/2309.10108 (CHI 2024). [g] Large Language Models for Automated Data Science: Introducing CAAFE for Context-Aware Automated Feature Engineering. https://arxiv.org/abs/2305.03403 (NeurIPS 2023). [h] Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow. https://arxiv.org/abs/2306.07209 (2023). [i] https://github.com/pytorch/pytorch/network/dependents [j] https://github.com/numpy/numpy/network/dependents --- Rebuttal 3: Comment: > From my perspective, the answer is predictable: LLMs can implicitly learn simple constraints but fail for complex constraints during pre-training **Despite the seemingly obvious answer, our results reveal multiple surprising findings**.
For example, Figure 13 (see attached PDF in global response) shows that for the simple equality constraint: `in_features=input.shape[-1]` in `torch.nn.Linear`, open-source LLMs cannot generalize to out-of-distribution problems (i.e., tensor rank > 4) and have less than 20% accuracy. In contrast, GPT-4-Turbo is able to maintain its performance. One potential reason is that LLMs are mostly trained with data with very common shapes or ranks (e.g., rank=3, 4), leading them to always copy the 3rd/4th shape dimension, while the true rule is to copy the last dimension. As such, LLMs can easily make mistakes on uncommon inputs. For example, when rank=6, they still copy the 3rd or 4th dimension. We also examined the attention weights and found that when LLMs make mistakes for `torch.nn.Linear`, it is usually because they are assigning larger attention weights to incorrect positions, and therefore copying values from a wrong parameter. We believe there can be lots of interesting follow-up works from our fine-grained benchmark and analysis. > (a) the potential reasons for this phenomenon Please also note that we did conduct in-depth analyses based on different problem settings, constraint types (e.g., equality, inequality, arithmetic, and set-related), and across LLMs, and have interesting findings (highlighted in Section 4.1, 4.2, and 4.3). For example, we found that there is a large difference between open source LLMs and state-of-the-art LLM’s performance, especially on complex constraints. Such difference in performance has not been observed previously on standard coding benchmarks (e.g., HumanEval) where the performance gap between open and close source LLMs are considerably smaller. We also have additional interesting findings and analyzed potential reasons regarding the common mistakes made by LLMs, such as: 1. 
**LLMs struggle with uncommon input tensors**: We found that across many APIs and constraints, LLMs struggle when provided with uncommon input tensor ranks (i.e., rank > 4) or uncommon shapes (e.g., `x = torch.rand(9, 30, 23, 4)`). The reason is that LLMs are mostly trained with data that contains very common shapes or ranks. As such, LLMs can easily make mistakes on uncommon inputs. 2. **LLMs tend to predict common parameter values blindly**: We also observe that LLMs tend to generate common parameter values (e.g., 0, 1, powers of 2) which often turn out to be incorrect. This is again because LLMs are trained with pre-training code that frequently contains such parameter patterns and thus are likely to predict them even given a different input context. 3. **LLMs pay attention to the wrong tokens/irrelevant parameters**: LLMs can learn spurious correlations and pay attention to the wrong context tokens. For example, open-source LLMs struggle with the simple equality constraint `in_features=input.shape[-1]` in `torch.nn.Linear` because the attention weights are focused on the irrelevant parameters. > (b) some possible initial solutions for this issue during pretraining / finetuning / inference stage Please note that we have explored possible initial solutions such as prompting techniques like CoT during the inference stage. We also added the new ReAct and documentation-augmented experiments as suggested by reviewers. We also want to point out that the exact reason for investigating this problem is because the model has been exposed to a large amount of similar data to the problems in our studied settings during the pre-training stage. This led to our goal of studying the ***implicit*** ability for LLMs to satisfy numerical constraints across different constraint types and difficulty levels without any additional fine-tuning or prompting. 
Additionally, we believe that before exploring additional new techniques to improve performance, building an extensive and high quality benchmark is extremely important. To enable future research and evaluate potential solutions, we constructed a detailed benchmark -- DSeval. We believe this is an essential and crucial contribution, allowing new techniques to be developed and tested in this important domain of generating correct and valid DS code. --- Rebuttal Comment 3.1: Comment: Thanks for your further clarification. However, I still have the following main concerns. **1. The mentioned surprising findings do not seem that surprising for me, which is also the main reason why I think that this paper investigates an interesting yet less significant problem – the key finding in this paper (i.e., LLMs can implicitly learn simple constraints but fail for complex constraints during pre-training) is predictable.** > (1) LLMs struggle with uncommon input tensor. (2) LLMs tend to predict common parameter values blindly. These two findings can be indicated by [1]. LLMs perform case-based reasoning instead of rule-based reasoning: LLMs struggle with uncommon input tensor due to lack of similar case, and LLMs tend to generate common parameter values due to similar case in pretraining. [1] Case-Based or Rule-Based: How Do Transformers Do the Math? ICML 2024. > (3) LLMs pay attention to the wrong tokens/irrelevant parameters. Could you please tell me which part of your paper can lead to this finding? I do not find any attention map visualization in this paper. Please correct me if I am wrong. **2. The assumed potential reasons for these findings should be supported by experiments.** > LLMs struggle with uncommon input tensors: We found that across many APIs and constraints, LLMs struggle when provided with uncommon input tensor ranks (i.e., rank > 4) or uncommon shapes (e.g., x = torch.rand(9, 30, 23, 4)). 
The reason is that LLMs are mostly trained with data that contains very common shapes or ranks. As such, LLMs can easily make mistakes on uncommon inputs. For example, I think providing the experiments of pretraining/finetuning LLMs with the designed uncommon inputs can support the assumed reasons. **3. The original manuscript only provides CoT as the initial solution, while providing ReAct and documentation-augmentation during the rebuttal. All these three methods are performed during inference stage. How about pretraining/finetuning stage?** - If Figure 6h refers to the CoT results, I think that the legends are wrong? Or could you tell me which line refers to CoT? - I think documentation augmentation can be a strong strategy. But from Figure 14, GPT-4 turbo w/ doc performs poor compared with GPT-4 turbo. This is quite surprising for me. Could you please provide the prompt for GPT-4 turbo w/ doc? Moreover, could you tell me the difference between gpt4-turbo and gpt4-turbo-inst.? I think that gpt4-turbo from OpenAI is already the instruction-following version. --- Rebuttal 4: Comment: We truly appreciate the prompt response! Meanwhile, we noticed some misunderstandings and additional experiment requests which were not mentioned in the original review (so we did not have a chance to address in the rebuttal period). We are happy to respond to them in detail below. >The mentioned surprising findings do not seem that surprising for me, which is also the main reason why I think that this paper investigates an interesting yet less significant problem – the key finding in this paper (i.e., LLMs can implicitly learn simple constraints but fail for complex constraints during pre-training) is predictable. ... These two findings can be indicated by [1]. Please note that in our previous response, we referred to [1] as Hu, Yi, et al.’s ICML 2024 or [Hu, Yi, et al.] and have already discussed the fundamental difference. 
We will make sure to discuss this related work in our revised manuscript. Again, we would like to point out that our work and [1] study two completely different problem domains: 1. First, different from the **explicit** problems in [1] (e.g., `1+2=?`), in our study, the API constraints are **not** directly specified in the problem. Instead the LLM should **implicitly** learn the constraints through pre-training on large amounts of open-source DS code examples. This makes the fundamental problem setup different between our work and [1]. 2. Second, our problems are also completely different from the synthetic problems (e.g., chicken and rabbit problem) used in [1] that are rarely seen during pre-training. Instead, there are more than 2,400,000/480,000 open-source projects using NumPy/PyTorch, meaning there exists a massive amount of DS program examples in the pre-training dataset in an exact or very similar format as in our benchmark. This makes our study’s starting point completely different from prior work [1]. 3. Third, we would like to point out that there are other works similar to [1] which show completely different findings. For example [1] shows that scratchpad fine-tuning underperforms compared to direct fine-tuning, while another work [a] demonstrates that fine-tuning with scratchpad (especially with few-shot examples) can significantly improve length generalization in the coding domain. Furthermore, we kindly argue that the value of scientific progress and discovery is not measured by the predictability of the final conclusion. In our work, we perform a rigorous study on evaluating the implicit ability of LLMs to satisfy valid numeric constraints in DS programs, which is an extremely important problem in the era of deep learning. We believe that even if some of the main conclusions are predictable, we are the first one to demonstrate this for the important domain of DS code generation. 
Additionally, our detailed analysis of each LLM’s performance on different problem settings and difficulties, along with our insights, can provide concrete guidelines for improving code LLMs and inspire future work. **References** [a] Exploring Length Generalization in Large Language Models. https://openreview.net/forum?id=zSkYVeX7bC4 NeurIPS 2022 (poster). --- Rebuttal 5: Comment: > (3) LLMs pay attention to the wrong tokens/irrelevant parameters. Could you please tell me which part of your paper can lead to this finding? I do not find any attention map visualization in this paper. Please correct me if I am wrong. We apologize for the confusion. The attention analysis is part of the additional in-depth error analysis we conducted during the rebuttal period following the suggestion of Reviewer xaud. Since we couldn’t provide figures in the comments, we explain the results in text below. We will add the attention visualization as well as a more in-depth discussion in our revision. To give an example, for the DeepSeek-1.3B model and the API `torch.nn.Linear` with difficulty of rank=5, the input tensor has 5 dimensions and is presented in context like `x = torch.randn(5, 4, 2, 14, 11)`, and the model is asked to predict a parameter `in_features`, with the constraint being `in_features==input.shape[-1]`. For this simple task, the LLM only achieves 6% accuracy. To investigate why, we compute the attention weight of each input token (maximum of all attention heads across all layers) to locate the most significant token among the relevant ones (e.g., `5, 4, 2, 14, 11` in the previous example). Next, we map it to a specific dimension of the input tensor (i.e., `0, 1, 2, 3, 4`). Out of the 200 tests, the number of times the predicted value matches each input dimension is: `{0: 30, 1: 113, 2: 40, 3: 19, 4: 12}`. The correct answer should be `4`, the last dimension of a rank-5 tensor. 
Note that the sum is not 200 because the predicted value can match more than one input dimension if they are identical. We observe that the predicted value almost always (98%) matches the context token that has the highest attention weight. This indicates that the LLM does learn to always copy a specific dimension from the input tensor. As for the detailed break-down: (1) When the attention is paid to the wrong context token/parameter value (92%), it leads to incorrect results; (2) For 1% of the cases, it pays attention to the right dimension; (3) Interestingly, for 5% of the cases it copies from the wrong position but that specific value just happens to be correct. Please kindly let us know if any further analysis is needed, and we are happy to add that in the next revision. >The original manuscript only provides CoT as the initial solution, while providing ReAct and documentation-augmentation during the rebuttal. All these three methods are performed during inference stage. How about pretraining/finetuning stage? For example, I think providing the experiments of pretraining/finetuning LLMs with the designed uncommon inputs can support the assumed reasons. Again, we want to stress that our goal is to evaluate the ***implicit*** reasoning capability of LLMs in solving numeric constraints (as indicated by our title as well as our introduction paragraphs). We totally agree with the reviewer that having training experiments can potentially reinforce and unlock additional insights. However, this is ***outside the scope of our study***. Additionally, the reviewer only asked for the additional ReAct prompting experiment during the rebuttal period, which we did. Due to the time limit and resource cost of the reviewer-author discussion period, we are unable to perform any pre-training or fine-tuning experiments. 
Furthermore, prior work [b, c, d] with similar contributions (i.e., discovering important findings/limitations for LLMs) also does not include any pre-training or even fine-tuning experiments. We would also like to point out that fine-tuning and pre-training with task-specific synthetic data on small-sized LLMs do not always lead to the same conclusion when done on large state-of-the-art LLMs trained with a mixture of different real-world data. **References** [b] Statistical Knowledge Assessment for Large Language Models. https://openreview.net/forum?id=pNtG6NAmx0 NeurIPS 2023 (poster). [c] Towards Understanding Factual Knowledge of Large Language Models. https://openreview.net/forum?id=9OevMUdods ICLR 2024 (poster). [d] KoLA: Carefully Benchmarking World Knowledge of Large Language Models. https://openreview.net/forum?id=AqN23oqraW ICLR 2024 (poster). --- Rebuttal 6: Comment: >If Figure 6h refers to the CoT results, I think that the legends are wrong? Or could you tell me which line refers to CoT? The dashed lines with triangular markers in Figure 6h refer to the CoT prompting approach when using the instruction-tuned LLMs (as described in lines 296-297). We included the complete CoT prompt we used for all models in the Appendix. The solid lines with circular markers refer to the base LLMs (not the instruction-tuned variants), allowing us to compare and contrast their performances. >I think documentation augmentation can be a strong strategy. But from Figure 14, GPT-4 turbo w/ doc performs poorly compared with GPT-4 turbo. This is quite surprising for me. Could you please provide the prompt for GPT-4 turbo w/ doc? Great observation! We would like to point out that adding documentation does not always decrease performance. For example, for `torch.nn.Fold`, adding documentation improves the performance of GPT-4-Turbo across all difficulty levels (Figure 14 (e)). 
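Incidentally, whether a candidate `kernel_size` is valid for such a Fold problem can be checked mechanically from the shape relation given in the `torch.nn.Fold` documentation (the input must have shape `(N, C * prod(kernel_size), L)` where `L` is the block count). Below is a small sketch of such a check, not part of the paper's tooling: `fold_kernel_ok` is a hypothetical helper, shapes are plain tuples, and the values are taken from the `torch.randn(90, 96, 126)` prompt discussed in this thread:

```python
from math import prod

# torch.nn.Fold expects input of shape (N, C * prod(kernel_size), L), where
# L = prod_d((output_size[d] + 2*padding - dilation*(kernel_size[d]-1) - 1)//stride + 1)
def fold_kernel_ok(input_shape, output_size, kernel_size,
                   dilation=1, padding=0, stride=1):
    _, channels, length = input_shape
    blocks = prod(
        (o + 2 * padding - dilation * (k - 1) - 1) // stride + 1
        for o, k in zip(output_size, kernel_size)
    )
    # the channel dimension must factor as C * prod(kernel_size),
    # and the block count must match the last input dimension
    return channels % prod(kernel_size) == 0 and length == blocks

shape = (90, 96, 126)                           # x = torch.randn(90, 96, 126)
print(fold_kernel_ok(shape, (15, 10), (2, 2)))  # True: 96 % 4 == 0 and 14*9 == 126
print(fold_kernel_ok(shape, (15, 10), (4, 4)))  # False: 12*7 == 84 != 126
```

This is exactly the kind of coupled arithmetic constraint (divisibility plus a block-count product) that the difficulty levels in the benchmark exercise.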
However, as the reviewer pointed out, there are also similar cases where adding documentation decreases performance. For example, GPT-4-Turbo drops in performance when given the documentation for `torch.nn.Conv2d` (Figure 14 (c)). Please note that the success rate of adding documentation can vary depending on the specific model, the API studied, and the quality of the documentation. Although modern LLMs are increasingly powerful, they are still far from perfect and cannot always effectively leverage all the information provided. Such “surprising” findings can hopefully also inspire various future work for further improving the ***implicit*** reasoning capability of LLMs. Here is the complete prompt for GPT-4-Turbo for `torch.nn.Fold` with documentation: System Prompt: ``` You are an expert Python programmer and are good at writing correct PyTorch code. ``` Input Prompt: ```` Please refer to the given API documentation and complete the Python program. Documentation for the torch.nn.Fold API: {documentation_omitted_due_to_space} Please complete the program by filling in the correct API parameter(s). You should keep the exact same program unchanged, just fill in the missing code part. ```python import torch x = torch.randn(90, 96, 126) m = torch.nn.Fold(output_size=(15, 10), kernel_size=<insert code here>, dilation=1, padding=0, stride=1) ``` ```` In the above example, we omit the exact documentation due to the reply word limit. We provide the raw documentation of each API (obtained from the source code docstring). An example can be found here: https://pytorch.org/docs/stable/_modules/torch/nn/modules/fold.html#Fold >Moreover, could you tell me the difference between gpt4-turbo and gpt4-turbo-inst.? I think that gpt4-turbo from OpenAI is already the instruction-following version. Sorry for the confusion, you are right that GPT-4-Turbo is already the instruction-following version. We use *-Inst. 
to differentiate the generation settings used in our study: infilling (GPT-4-Turbo) and free-form generation (GPT-4-Turbo-Inst.). For GPT-4-Turbo, we use our infill-specific prompt (see Appendix F for an example) to ask it to **only fill in** the missing code without adding any additional text. This setup allows us to compare against other infilling LLMs in the same setting. On the other hand, for GPT-4-Turbo-Inst., we allow it to generate additional text (such as CoT or ReAct reasoning steps). Please note that GPT-4-Turbo-Inst. is only used in our CoT experiments in the original paper. Thanks again for the question, and we will definitely use better naming in the next revision to avoid such confusion. --- Rebuttal Comment 6.1: Comment: Thanks for your further reply. I want to clarify some key points. **1. I do not think the raised concerns are clear misunderstandings.** I mention the reference [1] to demonstrate why I think this paper does not discover exciting findings for me. I didn’t claim that this paper and [1] investigate similar problems. [1] Case-Based or Rule-Based: How Do Transformers Do the Math? ICML 2024. **2. I didn’t request additional experimental results during author-reviewer discussion. Also, the concern about initial solutions is already included in my initial review (See Weakness 2).** The mentioned experiments are the reasons why I think this paper does not provide sufficient insights about potential reasons for the phenomenon. Although I acknowledge that the authors have made efforts during the rebuttal period, **from my perspective**, this paper does not meet the high standards expected at NeurIPS and my concerns about the contribution remain unresolved, so I will maintain my original score. --- Reply to Comment 6.1.1: Comment: We are very sad to hear that the reviewer found our efforts across multiple responses and the added experimental results insufficient. 
We have made every effort possible to address all the main concerns raised, and hope the reviewer can also look at the other positive reviews and reconsider. Below are our answers: >I do not think the raised concerns are clear misunderstandings. I mention the reference [1] to demonstrate why I think this paper does not discover exciting findings for me. I didn’t claim that this paper and [1] investigate similar problems. If the reviewer believes that [1] and our work do not investigate similar problems, then how does the finding of [1] affect the contribution of our work? As in our previous replies, we again point out that scientific discovery is not based on whether something is “exciting” or can be easily “predicted”. The prediction (i.e., hypothesis) to be investigated needs to be rigorously shown and proven through experiments. We believe that our work is an extensive evaluation of the implicit ability of LLMs to satisfy valid numeric constraints in DS programs, which is an extremely important problem in the era of deep learning. > I didn’t request additional experimental results during author-reviewer discussion. Also, the concern about initial solutions is already included in my initial review (See Weakness 2). You did. The original review did not mention anything about fine-tuning or pre-training, even including Weakness 2. In fact, the original review only requested a new experiment on ReAct in Weakness 3: *“I think one natural question is that whether LLMs can solve this problem by using ReAct [1] prompt strategy, i.e., each time LLMs call a data science API, LLMs are prompted to first output a thought about the constraint for this API, and then generate the parameter. I am curious about the results about this setting.”*. 
We did add this ReAct experiment and demonstrated interesting results, but the reviewer then completely ignored our new ReAct results and asked for new experiments on fine-tuning/pre-training, e.g., in the new [comment on Aug 11](https://openreview.net/forum?id=LfC5rujSTk&noteId=rrQOuQhfQa): *“The original manuscript only provides CoT as the initial solution, while providing ReAct and documentation-augmentation during the rebuttal. All these three methods are performed during inference stage. How about pretraining/finetuning stage?”*.
Summary: The authors systematically investigate how well current LLMs learn numeric parameter constraints of functions and deep learning operators in the NumPy and PyTorch libraries. Their main finding is that although it is widely assumed that current LLMs can solve arithmetic constraints, the performance of even state-of-the-art LLMs like GPT-4-Turbo drops drastically when the complexity of the constraints increases, and accuracy sometimes even drops to 0. Another interesting finding is that there is a huge gap between open-source models and GPT-4-Turbo, which is not captured by benchmarks like HumanEval. The authors introduce a public benchmark called DSeval based on these findings which demonstrates this gap and allows future research to measure and narrow it. Strengths: The paper presents a first, thorough, and systematic study of these constraints, which shows that in many cases LLMs perform worse than what is currently widely believed. The paper also illuminates the strong and weak points of these LLMs, and introduces a new benchmark dataset which shows a huge gap between open-source LLMs and GPT-4-Turbo, even when they seem to perform similarly on other benchmarks like HumanEval. Weaknesses: Some of the terminology is not aligned with the widely used meaning of the concepts, which makes the paper harder to read. Most significantly, the paper talks about APIs like sum, max, or reshape. These are usually considered functions and/or deep learning operators, and not APIs. APIs are usually considered to be larger, like the API of a library such as PyTorch, or the API of OpenAI. Also, PyTorch is usually thought of as a deep learning library, not a data science library. Based on this I would probably not call the benchmark DSeval. Also, the parameter constraints are probably also a salient feature of the benchmark and the work, so I would call it something along the lines of DLParamEval or DLParamConstraintEval. 
Naming the benchmark is of course the authors' choice, so this is just a suggestion and won't influence my score. The paper doesn't distinguish between GPT-4 and GPT-4-Turbo. The authors usually write about GPT-4 while GPT-4-Turbo is used in the evaluations. Some smaller remarks: - I believe that the last three lines of the input are actually part of the output in Figure 2a. - The explanation of the subfigures could be added to the caption of Figure 2 - Full program, all parameters, and individual parameters could also be briefly explained near line 66 Technical Quality: 3 Clarity: 3 Questions for Authors: In Figure 5, why does the accuracy drop much more significantly for Linear than for the other functions/constructors? Is this only true for DeepSeekCoder-33B? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: There is no section on limitations but I don't believe that one would be necessary as the whole work is about the limitations of LLMs. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Question-1: In Figure 5, why does the accuracy drop much more significantly for Linear than for the other functions/constructors? Is this only true for DeepSeekCoder-33B?** Thanks for bringing it up! This is an interesting result. For `torch.nn.Linear(in_features, out_features)`, the only constraint is that `in_features` should match the last dimension of the input tensor. However, the DeepSeekCoder-33B model tends to copy the wrong dimension of the input tensor, likely because it has not seen many high-rank tensors (rank > 4) in the pre-training data. We further evaluate this phenomenon across different LLMs. Figure 13 (see attached PDF in global response) shows the results for both the full API parameter and single API parameter setting for `torch.nn.Linear` as we increase the difficulty (rank of the input data) across 8 LLMs. Similar to DeepSeekCoder-33B, the performance of the other LLMs also drops significantly when the rank reaches 4. Afterwards, the performance stabilizes at higher difficulties (i.e., rank > 4), especially for open-source LLMs. This is true for both the full API parameter and single API parameter setting. Surprisingly, we found that even the state-of-the-art GPT-4-Turbo drops in performance when the rank reaches 4. However, we see that GPT-4-Turbo was able to improve its performance at higher difficulties (i.e., rank > 6). After looking at the results, we found that for lower ranks, GPT-4-Turbo tends to use other APIs as “short-cuts” and forgoes directly analyzing `torch.nn.Linear`, as shown in the example below: ```python import torch x = torch.randn(1, 8, 10, 10) m = torch.nn.Linear(8*10*10, 3) y = m(x.view(1, -1)) ``` In the above example, instead of using only the last dimension (10), GPT-4-Turbo multiplies all the previous dimensions together and performs a flattening operation (`x.view(1, -1)`). This does not reflect the original meaning of the code and as such is evaluated as incorrect. 
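The intended constraint and the flatten "short-cut" above can be contrasted in a few lines of plain Python (a sketch; `linear_in_features_ok` is a hypothetical helper, and shapes are ordinary tuples rather than tensors):

```python
from math import prod

# intended constraint for torch.nn.Linear: in_features == input.shape[-1]
def linear_in_features_ok(input_shape, in_features):
    return in_features == input_shape[-1]

x_shape = (1, 8, 10, 10)                            # x = torch.randn(1, 8, 10, 10)
print(linear_in_features_ok(x_shape, 10))           # True: copy the last dimension
print(linear_in_features_ok(x_shape, 8 * 10 * 10))  # False: the flatten short-cut
print(prod(x_shape[1:]))                            # 800, the value the short-cut supplies
```

The short-cut only type-checks because the model also rewrites the input via `x.view(1, -1)`, which changes the meaning of the program rather than satisfying the original constraint.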
Thanks for this great question again! We will add these experimental results in the next revision of the paper. **Question-2: Most significantly, the paper talks about APIs like sum, max, or reshape. These are usually considered functions and/or deep learning operators, and not APIs. APIs are usually considered to be larger, like the API of a library such as PyTorch, or the API of OpenAI. Also, PyTorch is usually thought of as a deep learning library, not a data science library.** Thanks for the suggestions! First, please note that deep learning/machine learning are generally considered subfields of data science [48]. Libraries like PyTorch contain many data processing, manipulation, and transformation APIs used for data science operations. Furthermore, the widely used data science benchmark DS-1000 [29] contains both TensorFlow and PyTorch problems. Additionally, please note that we are not only targeting PyTorch; we also include NumPy in our study, and we plan to generalize this study to more data science libraries in the future. For libraries like PyTorch and NumPy, indeed “smaller” APIs like `sum`, `max`, or `reshape` are called operators as well, and there are “larger” APIs like `SoftMax`, `BatchNorm`, and `Conv2d` which combine multiple operators; meanwhile, they can all be considered APIs as well (please see https://pytorch.org/docs/stable/index.html - Python API). We use the term “API” instead of “operator” because it is more general and accurately describes our target: the *publicly* exposed operators or functions for which we expect LLMs to generate correct parameters. --- Rebuttal Comment 1.1: Comment: Thank you for your answers! It is indeed an interesting result, thank you for obtaining it! Regarding your second answer, I don't believe that Data Science is a subfield of Machine Learning but that's probably a matter of opinion so it won't influence my score. The PyTorch documentation also suggests that APIs are not single functions. 
If you open it using the link you provided, you can immediately see that the "Python API" refers to the collection of all of the functions, classes, etc.; it's not the case that each function is an API. Similarly, if you go to the "Functional higher level API", for example, you can see that the first sentence reads: "This section contains the higher level API for the autograd that builds on the basic API above and allows you to compute jacobians, hessians, etc.". So the API is the collection of the functions on this page. --- Reply to Comment 1.1.1: Comment: Thanks for reading our response and sharing your thoughts! >The PyTorch documentation also suggests that APIs are not single functions. If you open it using the link you provided, you can immediately see that the "Python API" refers to the collection of all of the functions, classes, etc.; it's not the case that each function is an API. We acknowledge that there is a difference of opinion regarding the definition of an API in terms of DS libraries including PyTorch. Here we refer the reviewer to prior work [a, b, 32] that also considers each individual function/class/etc. as an API. We will definitely make our definition clear in the next revision of our paper. **References** [a] DocTer: Documentation-Guided Fuzzing for Testing Deep Learning API Functions. (ISSTA 2023) [b] Free Lunch for Testing: Fuzzing Deep-Learning Libraries from Open Source. (ICSE 2022)
Summary: This paper investigates the ability of large language models (LLMs) to implicitly learn and apply numeric parameter constraints in data science (DS) APIs, focusing on the PyTorch and NumPy libraries. The authors conduct a comprehensive study across 28 representative APIs, evaluating LLMs in three settings: full program generation, all parameter prediction, and individual parameter prediction. They introduce DSEVAL, a benchmark containing 19,600 problems across 12 APIs with varying difficulty levels. The study evaluates both open-source and closed-source LLMs, including state-of-the-art models like GPT-4. Results show that while LLMs perform well on simple constraints and common patterns, their performance degrades significantly with increased difficulty or unusual inputs. The authors conclude that current LLMs, including GPT-4, struggle with complex arithmetic constraints and often rely on memorization of common patterns rather than true understanding of the underlying constraints. This research highlights the limitations of LLMs in handling numeric API constraints and provides a benchmark for future improvements in this area. Strengths: Originality: First systematic study of LLMs' ability to handle numeric constraints in DS APIs. Proposes a novel benchmark (DSEVAL) for evaluating this specific capability. Challenges previously untested assumptions about LLM capabilities. Quality: Comprehensive evaluation across 28 diverse APIs. Rigorous methodology using SMT solvers for validation. Evaluation of both open-source and closed-source models, including state-of-the-art LLMs. Clarity: Well-structured presentation with clear explanations of complex concepts. Effective use of examples, tables, and figures to illustrate key points. Accessible to readers without deep expertise in LLMs or DS APIs. Weaknesses: Absence of error bars or statistical significance tests: Lacks quantification of result certainty. 
Suggestion: Include error bars or conduct statistical tests to strengthen the validity of findings Limited exploration of prompting techniques: Minimal investigation of advanced prompting methods. Suggestion: Experiment with more diverse prompting strategies to potentially improve LLM performance. Lack of human baseline: No comparison to human performance on similar tasks. Suggestion: Include a human baseline to contextualize LLM performance Limited discussion on potential solutions: Doesn't offer many concrete suggestions for improving LLMs in this domain. Suggestion: Propose and discuss potential approaches to enhance LLM capabilities in handling numeric constraints Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why did you limit your study to PyTorch and NumPy? Wouldn't including more libraries make your findings more generalizable? 2. Your study lacks a human baseline. How can we interpret the LLM performance without knowing how humans perform on similar tasks? 3. You don't provide error bars or statistical significance tests. How confident are you in the reliability of your results? 4. Your paper doesn't explore fine-tuning or more advanced prompting techniques. Couldn't these potentially improve LLM performance significantly? 5. The paper lacks an in-depth error analysis. Wouldn't categorizing common mistakes provide more insights into LLM limitations? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Lack of human baseline: The study does not provide a comparison to human performance on similar tasks, making it challenging to contextualize the LLM performance. Absence of error analysis: The study lacks a detailed categorization and analysis of common error patterns, which could provide deeper insights into specific LLM limitations. No error bars or statistical significance tests: The paper does not include measures of statistical uncertainty, which could strengthen the validity of the findings. 
Limited discussion on potential solutions: While the paper identifies limitations in LLM performance, it offers few concrete suggestions for improving LLM capabilities in handling numeric constraints. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Question-1: Why did you limit your study to PyTorch and NumPy?** We chose PyTorch and NumPy due to their wide adoption in the data science community: PyTorch is used by over 480k open-source GitHub projects [a] and NumPy is installed more than 300 million times monthly [b]. Focusing on these two allowed for a more in-depth analysis within our budget. You are absolutely right that including more libraries will make our findings more generalizable, and we plan to include more DS libraries (e.g., Pandas, Matplotlib, Scikit-learn, SciPy, and TensorFlow) in future work. **Question-2. Your study lacks a human baseline. How can we interpret the LLM performance without knowing how humans perform on similar tasks?** Please note that most code benchmarking work (e.g., DS-1000, MBPP, HumanEval) does not provide a human baseline. This is because it is expensive to conduct user studies on coding problems, especially for DS programs where domain expertise is required. Additionally, people with different levels of experience can perform vastly differently. Meanwhile, unlike prior benchmarks where the natural language problem description can be unclear and thus imposes an upper bound on LLM performance [c], our benchmark is unambiguous, and we could expect ~100% accuracy from sufficiently strong LLMs or expert developers. In this work, we interpret the LLM performance by comparing different LLMs and different problem complexities. As nicely summarized by Reviewer-bNWw: > (1) Their main finding is that although it is widely assumed that current LLMs can solve arithmetic constraints, the performance of even state-of-the-art LLMs like GPT-4-Turbo drops drastically when the complexity of the constraints increases, and accuracy sometimes even drops to 0. > (2) Another interesting finding is that there is a huge gap between open source models and GPT-4-Turbo, which is not captured by benchmarks like HumanEval. 
**Question-3: You don't provide error bars or statistical significance tests. How confident are you in the reliability of your results?** Due to the high cost, we use greedy decoding in all our experiments except for the full program setting (Section 4.1). Therefore, in all other settings, the performance is deterministic. For the full program setting, we vary the temperature from 0.2 to 1 and sample 200 DS programs across all 28 APIs. As a result, performing repeated experiments to compute error bars would be extremely costly, especially for commercial models like GPT-4-Turbo. Furthermore, please note that in all settings of our benchmark, each problem is independently sampled from an identical distribution for its difficulty level. As such, the pass/fail outcome follows a Bernoulli distribution, and we draw N=200 samples for each task distribution. Therefore, using the normal approximation: $p\approx \hat{p}\pm\frac{z_{\alpha}}{\sqrt{N}}\sqrt{\hat{p}(1-\hat{p})}$, where $\hat{p}$ is the average accuracy (proportion of successes) and $z_{\alpha}$ is the $1-\frac{\alpha}{2}$ quantile of a standard normal distribution corresponding to the target error rate $\alpha$. For a 95% confidence level, $z_{.05}=1.96$, and the half-width of the interval for $p$ (i.e., accuracy in our results) is at most $1.96\sqrt{0.25/200}\approx 0.069$ in the worst case ($\hat{p}=0.5$), and tighter for accuracies near 0 or 1. Therefore, we believe our results are statistically meaningful. **Question-4: Your paper doesn't explore fine-tuning or more advanced prompting techniques. Couldn't these potentially improve LLM performance significantly?** First, kindly note that we study chain-of-thought (CoT) prompting in Section 4.3. Our results on the difficult APIs show that while CoT prompting can indeed boost performance (especially for SOTA LLMs like GPT-4-Turbo), it still struggles to solve more complex arithmetic constraints as difficulty increases. Second, following reviewer YxwD’s suggestion, we evaluate the ReAct prompting strategy in Figure 12 (see attached PDF in global response). 
We observe that while ReAct can perform better than CoT, it still fails to solve more complex arithmetic constraints. We also follow reviewer WbTY’s suggestion and include API documentation in prompts (Figure 14). We found that adding documentation does not always improve performance on our tasks. Furthermore, we totally agree that fine-tuning may significantly improve the accuracy of LLMs. However, the focus of the paper is to evaluate the **implicit** ability of LLMs to model parameter constraints from pre-training. We plan to evaluate fine-tuning in future work. **Question-5: Wouldn't categorizing common mistakes provide more insights into LLM limitations?** Thanks for this suggestion! In our paper, we mainly focus our in-depth analysis on the constraint types (e.g., equality, inequality, arithmetic, and set-related). Regarding categorizing common mistakes made by LLMs, we did have interesting findings, such as: - **LLMs struggle with uncommon input tensors**: We found that across many APIs and constraints, LLMs struggle when provided with uncommon input tensor ranks (i.e., rank > 4) or uncommon shapes (e.g., `x = torch.rand(9, 30, 23, 4)`). The reason is that LLMs are mostly trained with data that contains very common shapes or ranks. As such, LLMs can easily make mistakes on uncommon inputs. - **LLMs tend to predict common parameter values blindly**: We also observe that LLMs tend to generate common parameter values (e.g., 0, 1, powers of 2) which often turn out to be incorrect. This is again because the pre-training code frequently contains such parameter patterns, so LLMs are likely to predict them even given a different input context. Thanks again for all the great suggestions, and we will work towards adding them in the next version of the paper! 
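The normal-approximation interval from Question 3 above can be checked numerically (a small sketch; `half_width` is a hypothetical helper, with N = 200 and z = 1.96 as stated there):

```python
import math

# 95% CI half-width for a Bernoulli success proportion (normal approximation):
# half = z * sqrt(p_hat * (1 - p_hat) / N)
def half_width(p_hat, n=200, z=1.96):
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

print(round(half_width(0.5), 4))   # 0.0693, worst case at p_hat = 0.5
print(round(half_width(0.06), 4))  # 0.0329, e.g. for a 6% accuracy
```

The interval shrinks as the observed accuracy approaches 0 or 1, so tasks where a model scores near 0% or near 100% are estimated most tightly.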
**References** [a] https://github.com/pytorch/pytorch/network/dependents [b] https://pypi.org/project/numpy/ [c] L2ceval: Evaluating language-to-code generation capabilities of large language models. https://arxiv.org/abs/2309.17446 (2023). --- Rebuttal 2: Title: Thanks for your review and great suggestions! Comment: Thanks again for your great suggestions to categorize and analyze the common error patterns, which helped us strengthen the paper! We would like to share with you our new interesting findings regarding other common mistakes made by LLMs. In addition to 1) struggling with uncommon input tensors and 2) predicting common parameter values blindly, we found that 3) **LLMs pay attention to the wrong tokens/irrelevant parameters**: LLMs can learn spurious correlations and pay attention to the wrong context tokens. For example, open-source LLMs struggle with the simple equality constraint `in_features=input.shape[-1]` in `torch.nn.Linear` because the attention weights are focused on irrelevant parameters. **Attention analysis:** To give an example, for the DeepSeek-1.3B model and the API `torch.nn.Linear` with difficulty of rank=5, the input tensor has 5 dimensions and is presented in context like `x = torch.randn(5, 4, 2, 14, 11)`, and the model is asked to predict a parameter `in_features`, with the constraint being `in_features==input.shape[-1]`. For this simple task, the LLM only achieves 6% accuracy. To investigate why, we compute the attention weight of each input token (maximum of all attention heads across all layers) to locate the most significant token among the relevant ones (e.g., `5, 4, 2, 14, 11` in the previous example). Next, we map it to a specific dimension of the input tensor (i.e., `0, 1, 2, 3, 4`). Out of the 200 tests, the number of times the predicted value matches each input dimension is: `{0: 30, 1: 113, 2: 40, 3: 19, 4: 12}`. The correct answer should be `4`, the last dimension of a rank-5 tensor. 
Note that the sum is not 200 because the predicted value can match more than one input dimension if the dimensions are identical. We observe that the predicted value almost always (98%) matches the context token that has the highest attention weight. This indicates that the LLM does learn to always copy a specific dimension from the input tensor. As for the detailed breakdown: (1) when attention is paid to the wrong context token/parameter value (92% of the time), it leads to incorrect results; (2) 1% of the time, it pays attention to the right dimension; (3) interestingly, 5% of the time it copies from the wrong position, but that specific value just happens to be correct. We will incorporate your suggestion of comprehensive error pattern categorization, the attention visualization results, and the other remarks in your review in our next revision. Thanks again for your support and valuable comments!
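As a rough sketch of the computation above (illustrative only: the function names, array layout, and token positions are simplified assumptions rather than our exact analysis code), the max-attention-to-dimension mapping can be written as:

```python
import numpy as np

# Hypothetical sketch: for one test, take the maximum attention weight over
# all layers and heads for every context token, then map the highest-weighted
# relevant token (a shape literal such as 5, 4, 2, 14, 11) to its
# input-tensor dimension index.
def max_attention_dimension(attn, relevant_positions):
    """attn: array of shape (layers, heads, seq_len) holding attention
    weights from the prediction position to every context token.
    relevant_positions: token positions of the shape literals, listed in
    dimension order (dimension 0 first)."""
    per_token = attn.max(axis=(0, 1))              # max over layers and heads
    weights = per_token[list(relevant_positions)]  # keep only shape tokens
    return int(np.argmax(weights))                 # winning dimension index

# Tally which dimension wins across a batch of tests.
def tally(attns, relevant_positions, n_dims):
    counts = {d: 0 for d in range(n_dims)}
    for attn in attns:
        counts[max_attention_dimension(attn, relevant_positions)] += 1
    return counts
```

Comparing such tallies against the dimensions the predicted values match is what yields the 98% agreement figure reported above.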
Rebuttal 1: Rebuttal: We thank all the reviewers for their insightful comments and suggestions to improve the paper! We address the main questions and concerns in this rebuttal. Furthermore, we also plan to revise the paper accordingly to address all other minor suggestions and comments. We have also attached a PDF with the new experimental results requested by the reviewers; please see the attached PDF for the detailed result figures. Please kindly let us know if there is any misunderstanding of the questions, and we are very happy to provide further updates or clarifications during the reviewer-author discussion period. Pdf: /pdf/0d48d5489a0f053696a9c6cbe5adb53ffb2cfa3f.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper constructs a benchmark, DSeval, of 19600 programs across 28 data science (DS) library APIs with numerical constraints and uses the benchmark to investigate the capability of LLMs in generating valid DS programs that satisfy those numerical constraints. Additionally, this paper categorizes the constraints into four different groups: equality, inequality, arithmetic, and set-related. The experiments include 3 different generation settings: 1) full program, 2) all parameters, and 3) individual parameters, and cover 8 LLMs including both closed-source and open-source models. The paper shows that LLMs are great at generating simple DS programs, but the performance of LLMs drops significantly when the difficulty increases. Strengths: + The paper targets an important problem. + The paper is easy to follow in most places. Weaknesses: - The paper lacks important details. - Some claims need better justification. - The experiment is limited in scale in terms of the number of studied APIs. One contribution of the paper is constructing the benchmark, and the paper claims that the design of the benchmark is general and can be easily extended to additional libraries. However, the paper lacks details of how the benchmark is constructed, making it difficult to evaluate this claim. First, lines 11/63/165 state the benchmark contains 28 APIs while line 80 says 12 APIs. The authors need to clarify this. Second, how are the 28 or 12 APIs selected? What does it mean by "representative"? Third, do these constraints exist? If so, were they written by developers? What format do these constraints take? Are they in natural language or a formal format? If they are in natural language, how are they converted into a formal format? Fourth, are these 19600 programs from existing projects? Fifth, the number of such APIs is large, but the paper only studies 28 or 12 APIs, which is a pretty small scale. 
Regarding the full program setting, how is the 3-step instruction constructed for an individual program? From the example, I can see 3 lines of instructions and 3 lines of code. Are all the programs composed of 3 lines of code? I believe it should not be the case. If there are more than 3 lines of code, does it mean that there will be more than one line of code for each instruction? The paper uses an SMT solver to validate the correctness of the generated program. It checks validity based on constraints. Since the constraints are specific to one API, I believe the SMT solver can verify the correctness of an individual statement. How about the logic of the whole generated program? It would be better to clarify this. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Why are only 28 or 12 APIs studied given a large number of APIs? 2. Could you elaborate on the details of the API constraints used in constructing the benchmark? Please see the detailed comments for clarification points. 3. How does the proposed work verify the correctness of the logic of a full program? Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The paper provides a discussion on limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Question-1: Why are only 28 or 12 APIs studied given a large number of APIs? What does it mean by "representative"?** Good question! Please note that although there are a large number of APIs, the commonly used ones (e.g., `Conv2d`) are not that many. For example, the widely-used benchmark DS-1000 [29] contains 1000 DS problems collected from StackOverflow (including 68 PyTorch problems and 220 NumPy problems), reflecting realistic use cases; meanwhile, it only covers 16 PyTorch APIs and 59 NumPy APIs (after excluding data construction APIs like `np.ones`). NNSmith [32], a popular DNN generator for testing ML libraries, chooses to support only 73 core operators that are commonly used in DL programs. Furthermore, not all of the commonly used APIs have numeric parameter constraints. As for our API selection process, first, we followed the prior work NNSmith and examined all 73 core operators it supports. Next, we selected the 22 core APIs that have numeric parameter constraints and added an additional 6 APIs to obtain the 28 APIs used in our study in the full program prediction setting (Section 4.1) and the full API parameter prediction setting (Section 4.2). Furthermore, we chose 12 APIs to cover the representative types of numeric constraints for detailed analysis in the single API parameter prediction setting (Section 4.3) and in our DSeval benchmark (Section 4.4). We use “representative” to mean representative with respect to the numeric parameter constraints in DS library APIs. Table 1 shows the categorization of the different types of numeric constraints that exist in DS libraries. Our selection criteria aim to select a list of APIs whose numeric parameter constraints are interesting and cover all the major constraint categories. You can also find a complete list of the 12 APIs and their corresponding constraints in Table 3 in the Appendix. 
**Question-2: Could you elaborate on the details of the API constraints used in constructing the benchmark?** In this work, we focus on the numeric constraints that are part of the popular DS APIs. These constraints are directly embedded into each individual API. In other words, to use these APIs, the generated DS code must satisfy these constraints. Please refer to Figure 1 for an example of a DS API and its corresponding constraint. The constraints are defined by developers according to the functionality of each DS API. These constraints are usually specified in natural language within the API documentation. In our study, we manually encode these constraints as satisfiability modulo theory (SMT) formulas and use an SMT solver (Z3) to check if the parameter values generated by the LLM are correct. Please see Section 2.3 for more details. The 19,600 programs in our DSeval benchmark are randomly generated by us and are not taken from existing projects. Each benchmark problem requires the LLM to generate a single parameter for an API in order to produce a valid program. To create these problems, we randomly generate values for both the input data and the other parameters in the target API. Using the encoded constraints, we ensure the problem is valid (i.e., the constraints are satisfiable) and retry if it cannot be satisfied. Again, please see Section 2.3 for more details on the input generation process. **Question-3. Regarding the full program setting, ... Are all the programs composed of 3 lines of code? I believe it should not be the case. ... How does the proposed work verify the correctness of the logic of a full program?** Thanks for asking the question, and we believe there may be some misunderstanding. 
Please note that this paper focuses on ***simple DS programs with only a single API call*** in all settings, including the ***Full program*** setting (see Line-109: “For the full program setting, we want the LLM to synthesize a complete DS program using a specific API from scratch”). We present the full prompt in Figure 2. While LLM-generated programs may contain arbitrary lines of code (e.g., sometimes there are more computations after calling the target API), we extract and focus on the first few code statements only (i.e., the input data generation statements followed by a single API invocation statement). We apologize for any confusion and will revise our presentation for clarity. ***Rationale for studying a single API***: First, isolating the evaluation to individual APIs or individual API parameters makes it easier to analyze the results. Such a fine-grained setting facilitates a detailed examination of the LLMs’ limitations with respect to various types of numerical constraints. Second, given that LLMs already struggle with constraints within a single API, we believe that expanding the benchmark to multiple APIs would likely yield accuracies too low to interpret meaningfully. We appreciate the reviewer's comments and will include a discussion of this future work in our revised manuscript. ***Extend to multiple APIs***: Please note that our method can easily be extended to more complex programs; such a program is essentially a computation graph consisting of multiple operators and their connections. Since we already symbolically model each individual operator (including input, output, constraints, type transfers), we can also combine these operators and symbolically generate and validate a full computation graph. To achieve this, we can reuse and modify the generation and validation framework provided by NNSmith [32], a popular tool that generates diverse DL computation graphs via formal constraint solving. 
For more details, please refer to the NNSmith paper. --- Rebuttal Comment 1.1: Comment: From Area Chair: This reply is from reviewer qmKc. I believe it was accidentally posted without the "visible to authors" box checked. :-) --- Thank you for the rebuttal. I would keep my ratings after reading the rebuttal and some thinking. My main concern is the number of APIs and the complexity of the problem. Compared with similar benchmarks in such kind, the evaluation is small and not comprehensive. The rebuttal mentions that this can be extended without real evidence. --- Rebuttal 2: Comment: To Area Chair: Thanks for noticing and posting the comment! To Reviewer qmKc: Thanks for reading our rebuttal and for your reply! > My main concern is the number of APIs and the complexity of the problem. Compared with similar benchmarks in such kind, the evaluation is small and not comprehensive. We kindly point out here that DSeval is the **first** benchmark targeting the validity of DS API parameter constraints, consisting of **19600** different problems that span **fine-grained** settings and difficulty levels for 12 APIs, and we evaluated across 8 state-of-the-art open/closed-source code LLMs. For comparison, DS-1000 [29], which is the closest benchmark we found in the DS code generation domain, contains 1000 problems. Would you kindly share examples of “similar benchmarks in such kind” you have in mind? Please allow us to further clarify our criteria for dataset construction. Firstly, our chosen APIs are the core APIs commonly used by users. While there are a large number of APIs in DS libraries, the commonly used ones (e.g., Conv2d) are not that many. For example, the widely-used benchmark DS-1000 [29] only covers 16 PyTorch APIs and 59 NumPy APIs (after excluding data construction APIs like “np.ones”). Additionally, not all the APIs have numerical parameter constraints. 
For example, NNSmith [32] supports 73 core operators that are commonly used in DL programs, and only 22 of them have numerical constraints that fall in the scope of our study, and we’ve included *all 22* of them (plus 6 additional ones) in our first two settings (full program generation and all parameter generation). Furthermore, some APIs have similar constraints or the same constraint types. For example, numpy.max and numpy.min have the same constraints, whereas torch.nn.Conv2d and torch.nn.Conv1d have very similar constraints. Therefore, in our DSeval benchmark, we select 12 representative APIs to keep the experiments at an affordable scale, while still ***covering all major constraint categories*** in Table 1. We believe this is a representative and **comprehensive (in terms of constraint types)** set of APIs and constraints used to evaluate LLMs’ capabilities. > The rebuttal mentions that this can be extended without real evidence. Thanks for asking this follow-up question. We apologize that we didn't include concrete evidence in our initial response, as it wasn't specifically requested at that time. Please find the supporting evidence below: We can extend the experiment to new APIs with little engineering effort. For example, we’ve already added torch.nn.Linear to our benchmark during the rebuttal as suggested by reviewer bNWw, and obtained interesting results. Building upon the original framework, we only need to add 17 lines of code for the newly supported API. We didn’t include more APIs due to computing budget constraints, but we are happy to extend the study to all 28 APIs and even more if the reviewer thinks it is crucial. As we also mentioned in our previous response, the reason we can easily support the addition of new APIs is that our framework uses symbolic constraint solving techniques to both generate and validate programs. New APIs can be added by simply encoding the numeric constraints (written as basic z3 formulas). 
Similarly, we can also add more complex chained API sequences using our framework by again symbolically modeling each API to build the symbolic computation graph and construct a program. Please note that we built our framework upon NNSmith [32], and thus such graph-level modeling is intrinsically supported and only requires minimal modification to support our use case. More concretely, **to verify the correctness of the logic of a full program, we just need to symbolically build the API graph, propagate the tensor shapes, and verify the validity of each API.** To give a more concrete example, let us consider two APIs with *symbolic shapes and symbolic parameters*: - F: Input shape [x,y], parameter a; Output shape [x, a]; Constraint: a<y - G: Input shape [x,y], parameter b; Output shape [x, y-b]; Constraint: y>b Chaining F(G(input)) with symbolic input [u,v]: 1. G([u,v]) -> [u,v-b]; Constraint: v>b 2. F([u,v-b]) -> [u,a]; Constraint: a<v-b Note that we have already symbolically modeled the shape transfer rules and validity constraints of each individual API. Therefore, the whole process uses symbolic expressions, allowing seamless propagation and validity checking through the API chain without modifying individual APIs. Please let us know if our response has addressed your concern, and we are more than happy to provide further evidence if the reviewer is willing to specify which aspects they’d like us to elaborate on or what particular evidence they’re seeking.
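As a toy illustration of the F(G(input)) chaining above (a pure-Python stand-in with concrete values; the actual framework expresses the shape-transfer rules and constraints as z3 formulas over symbolic shapes):

```python
# Each operator carries a shape-transfer rule plus a validity constraint;
# chaining propagates shapes while checking every constraint along the way.

def op_G(shape, b):
    x, y = shape
    if not y > b:                    # constraint of G: y > b
        raise ValueError("G: need y > b")
    return (x, y - b)                # output shape [x, y-b]

def op_F(shape, a):
    x, y = shape
    if not a < y:                    # constraint of F: a < y
        raise ValueError("F: need a < y")
    return (x, a)                    # output shape [x, a]

def chain_F_of_G(input_shape, a, b):
    """Propagate shapes through F(G(input)), validating each step."""
    return op_F(op_G(input_shape, b), a)

print(chain_F_of_G((7, 10), a=3, b=5))  # G: (7, 5); then F: 3 < 5 -> (7, 3)
```

With z3, the same propagation runs over symbolic shapes [u, v], so the collected constraints (v > b, a < v - b) can be solved to generate valid parameter values instead of merely checking them.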
Beyond Accuracy: Tracking more like Human via Visual Search
Accept (poster)
Summary: The authors touch upon a very important topic in visual tracking. They try to build upon the Central-Peripheral Dichotomy (CPD) theory, which describes how humans use visual information to track targets in complex environments. To this end, they propose a tracker named CPDTrack and the STDChallenge Benchmark (for spatio-temporal discontinuities). They claim that CPDTrack achieves SOTA performance on STDChallenge, aligns with human behavior, and generalizes across various benchmarks. Strengths: The authors clearly highlight the importance of aligning visual tracking in both humans and machines, and to that respect also conduct a human study. They also cover the related works decently well to highlight the previous work in the field. The distinction they made between LTT and GIT, and the attempt at doing a Visual Turing Test, makes the paper stand out. The absolute results presented on the STDChallenge also show performance gains compared to the other networks (albeit small). Weaknesses: The authors do not have IRB approval, which is one of the basic requirements for working with human subjects. I also did not see any strong statistical testing for human vs. machine responses. It seems the authors selected responses from all human participants; if not, I don't see any methods for dropping the out-of-distribution responses. The authors also conducted 5 experiments with human subjects, but I didn't find a clear link between the selected experiments for humans and the STDChallenge benchmark. There is also no justification for why removing the information query was treated as the baseline in the ablation studies. Please justify. For the STDChallenge benchmark, I didn't find a justification as to why only the said challenges were included in the benchmark. The authors should clarify that if possible. How easy/hard is it to include newer challenges in the benchmark? Technical Quality: 3 Clarity: 3 Questions for Authors: There is another similar algorithm called DorsalNet. 
Have the authors considered the similarities/differences with it? For the citation of PathTracker, the paper cited is InT, which is a solution to the PathTracker challenge. For completeness, the challenge was introduced in https://arxiv.org/abs/2110.02772, which the authors should consider citing. In Fig. 5 the authors say that both CPDTrack and humans are robust to occlusions, environment changes, and absence of the target. InT has similar claims; have the authors tested their benchmark with the TransT+InT proposed in the paper “Tracking without re-recognition”? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The gains are small compared to other networks on the STDChallenge benchmark. Have the authors made an attempt to check if they are statistically significant? Absence of IRB approval could raise ethical concerns, which is why the authors should work to mitigate it. In Appendix E4, the authors talk about mouse movements. Are the authors tracking mouse movements from the participants’ computers? If so, that certainly calls for IRB approval due to additional security and privacy concerns. Figure 11 is not clear enough as to where/how the humans fit. The authors should clarify what they mean by the heatmap in terms of errors. The model presented in Fig. 3 is rather convoluted. If possible, the authors should clarify it as well. The authors mention releasing code, but I didn’t find anything with the paper. The authors should make sure the code is released. If possible, they should also provide scripts/links to download the public datasets they used for better reproducibility.
Rebuttal 1: Rebuttal: Thank you for your review and feedback! We will release our code upon acceptance and include the new analyses below in the revision. Here are our responses to the following points: **IRB**: Previous studies [a] have demonstrated that such experiments only involve interaction between human subjects and computer systems (screen, mouse), posing no risk throughout the experiment and thus not requiring IRB review. Our experimental paradigm is similar to theirs. Before the experiment, participants were thoroughly briefed and confirmed their full understanding of the process by signing comprehensive instructions, ensuring clarity on the experimental procedures. We appreciate your reminder; we have communicated with our institution's IRB, submitted the necessary materials, and obtained IRB approval. **Statistical Testing**: Referencing [a], we focused on ensuring the accuracy and validity of data collected from a limited number of subjects to accurately reflect human visual capabilities. Consequently, we implemented measures to prevent out-of-distribution responses, enabling the use of all responses. 1) We did not use crowdsourcing platforms; instead, the Visual Turing Test was conducted in a laboratory environment under supervision. 2) Before the experiment, we assessed participants' cognitive and perceptual levels to ensure normalcy, and allowed them to familiarize themselves with the equipment through basic exercises. 3) During the experiment, participants were permitted to pause the experiment a certain number of times voluntarily to adjust their state. 4) In cases where the mouse cursor moved off-screen, we retained the result from the previous frame until the cursor returned to the screen area. **Visual Turing Test**: In the paper, we provide a detailed description of the Visual Turing Test process, which aims to study the differential capabilities between algorithms and humans under the STDChallenge. You can refer to Sec. 
4 and E for more details. We aligned the experimental procedures and metric calculations between humans and algorithms, allowing for comparison within the evaluation pipeline. **The Setting of the Baseline**: CPDTrack aims to study the effectiveness of CPD theory in the design of artificial neural networks (ANN). Moreover, from a model design perspective, the CPD motion model and the information query are decoupled, allowing for stepwise ablation experiments on CPDTrack. 1) We examine the performance of the CPD Motion Model itself. This is achieved by comparing the full CPD model against central-vision-only and peripheral-vision-only variants. 2) We center around the CPD Motion Model and enhance the encoder-selector-decoder framework by incorporating the information query, validating the effectiveness of cognitive feedback control in ANN. 3) We adjust the CPD Motion Model part in the ANN and align it with traditional tracking networks to demonstrate the effectiveness of our designed CPDTrack. **New Challenges**: As described in our Introduction and Related Work, STDChallenge is recognized by many studies for distinguishing STT from more challenging tasks like LTT and GIT, which are considered to represent more realistic video environments. In our setup, a sequence with a shotcut or disappearance-reappearance is deemed to include STDChallenge, naturally integrating challenges found in new real-world scenarios. Thus, STDChallenge not only incorporates but also enhances other challenging attributes to some extent, as depicted in Fig. 8(c); e.g., disappearance-reappearance is often associated with occlusion or out-of-view. Formally, STDChallenge does not exclude other tasks (such as PathTracker), but we advocate integrating STDChallenge with real-world scenarios to avoid ill-posed issues (such as a shotcut or occlusion confusing same-looking blocks in PathTracker). 
**Compare with DorsalNet**: We identified the most related paper using a network named DorsalNet [b], which we believe fundamentally differs from our CPDTrack. [b] proposes a new hypothesis on the function of dorsal visual pathway neurons during self-motion, using a 3D ResNet model to explain non-human primate neuron activity patterns and their selectivity to motion stimuli, enhancing understanding of animal behavior and self-localization in dynamic environments. Similarities: Both approaches are based on cognitive science theories to construct models, and focus on research involving moving targets. Differences: 1) Our model is based on established cognitive science theory (CPD theory), not assumptions. 2) Instead of studying humans or neurons, our goal is to enhance visual object tracking algorithms using human capabilities and explore differences in behavior (Science4AI). 3) Our research focuses on tracking other moving targets, not self-localization. **Compare with PathTracker**: Thank you for your reminder. We really learned a lot from PathTracker, and will include them in the revised version. Comparison is in the supplementary pdf Tab. R1. **Figure 11** is designed to show the error consistency among different decision-makers (machines/humans) in the STDChallenge; the calculation method follows [a]. This represents the degree of behavioral similarity between different decision-makers. For each sequence, we calculate the error consistency on each frame, and then average these across all sequences to obtain the overall error consistency among the decision-makers. **Figure 3** is divided into two parts. The upper part shows the relationship between visual acuity and eccentricity and the "encoder-selector-decoder" human vision framework found in cognitive neuroscience. The lower part illustrates our proposed CPDTrack model. The grey arrows running through both parts highlight the correspondence between the two parts. 
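To make the error-consistency computation in Figure 11 concrete, here is a small sketch, assuming the Cohen's-kappa-style definition of [a] (the function name and exact handling of edge cases are our illustrative assumptions):

```python
import numpy as np

# Error consistency between two decision-makers on one sequence: observed
# agreement of per-frame correctness, corrected for chance agreement.
def error_consistency(correct_a, correct_b):
    a = np.asarray(correct_a, dtype=bool)  # per-frame correctness, decision-maker A
    b = np.asarray(correct_b, dtype=bool)  # per-frame correctness, decision-maker B
    c_obs = float(np.mean(a == b))         # frames where both succeed or both fail
    p_a, p_b = a.mean(), b.mean()
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement expected by chance
    if c_exp == 1.0:
        return 1.0                         # degenerate case: no room above chance
    return (c_obs - c_exp) / (1 - c_exp)
```

Averaging this score over all sequences then gives the per-pair entries visualized as the heatmap in Figure 11.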
[a] Partial success in closing the gap between human and machine vision. NeurIPS 2021 [b] Your head is there to move you around: Goal-driven models of the primate dorsal pathway. NeurIPS 2021 --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their time and effort in answering the questions and providing clarifications. I would urge them to please include these clarifications in the final version of the paper as well. Thanks. --- Reply to Comment 1.1.1: Comment: Thank you for your review. We will incorporate additional clarifications following your suggestions and acknowledge the anonymous reviewers in the revised version of our paper. Thanks.
Summary: This paper presents a novel approach to visual object tracking by drawing inspiration from the Central-Peripheral Dichotomy (CPD) theory. The proposed CPDTrack aims to improve tracking performance by emulating human visual search mechanisms, particularly under challenging scenarios involving spatio-temporal discontinuities (STDChallenge). The paper introduces a new benchmark, STDChallenge, and demonstrates the effectiveness of CPDTrack through extensive experiments. Strengths: 1. The idea of incorporating human visual search mechanisms into object tracking is both innovative and promising. The use of the CPD theory to separate central and peripheral vision for enhanced tracking performance is well-motivated and addresses a significant gap in existing tracking methodologies. 2. The creation of the STDChallenge Benchmark is a significant contribution. 3. The experimental results are impressive, showing that CPDTrack achieves state-of-the-art performance in the STDChallenge Benchmark. Weaknesses: 1. CPDTrack is designed specifically to address the STDChallenge, and as noted in the paper, it may not perform as well in simpler or different scenarios. Despite the explanations provided, this lack of generality is a significant concern. 2. The paper states that the tracking speed on an A5000 GPU is 23 fps. Is this because the introduced modules are particularly time-consuming? This represents a considerable demand for computational resources, which could be a factor to consider in practical applications. Technical Quality: 2 Clarity: 3 Questions for Authors: See the weaknesses. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: This paper does not include a dedicated section discussing the limitations of the approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and feedback! We will include the new analyses below in the revision. We provide responses to the specific points below: As described in the paper, compared to existing algorithms, CPDTrack's advantage lies in its more robust performance in STDChallenge and its behavior more closely resembling that of humans. This also serves to validate the accuracy of the CPD theory to some extent. --- **Generalization of CPDTrack:** We believe that many simple scenarios in Single Object Tracking (SOT) are based on laboratory settings and even possess characteristics of toy scenarios (for example, tracking smoothly moving targets within a very short time) to some extent. In contrast, CPDTrack extends much further into the real world: - Local trackers introduce stronger priors, which enhance their performance in STT scenarios. Therefore, STDChallenge is fatal for them [a,b]; however, CPDTrack effectively addresses this challenge. - The setup of the STDChallenge Benchmark is that any sequence containing at least one "disappearance-reappearance" or "shotcut" is considered to include the STDChallenge. It is evident that the STDChallenge Benchmark already includes many relatively simple video environments, as shown in Fig. 10 (1) in the paper and Fig. R2 in the supplementary PDF. - Additionally, experiments have shown that humans perform worse than state-of-the-art (SOTA) trackers in simple scenes. This is likely because local trackers can more precisely focus on the target due to their local field of view. Moreover, the inclusion of a global perspective allows humans and CPDTrack to have a more comprehensive understanding of the target, which, although not incorrect, differs from the dataset's setup (e.g., "lion's tail" or "the tail rotor of a helicopter"; refer to Fig. R3 in the supplementary PDF). 
However, in complex scenarios, human capabilities to track moving targets surpass those of trackers, a trait also inherited to some extent by the human-like modeling of CPDTrack. --- **Computation overhead of CPDTrack:** We acknowledge that compared to some mainstream trackers, CPDTrack's human-like modeling causes some additional computational costs. However, in our experimental setup, CPDTrack achieves 20-30 fps, which already meets the requirements for real-time performance. Moreover, the main contribution of our paper is to validate the effectiveness of human-like modeling through CPDTrack, and the experimental conclusions have affirmed the motivation behind our work. While there are computational costs, these do not overshadow the contributions of CPDTrack. We greatly appreciate your questions and will continue to explore improvements in future work. --- **Limitations:** Overall, our work has the following shortcomings, which we will further address in future research: 1. **Specialization on STDChallenge:** CPDTrack focuses on the STDChallenge, resulting in suboptimal performance in some idealized simple scenarios where it underperforms compared to state-of-the-art (SOTA) trackers on certain datasets. 2. **Increased Computational Overhead:** The human-like modeling approach adds overhead, making CPDTrack less immediately applicable to real-world applications as it currently stands. 3. **Long-Tail Distribution in STDChallenge Benchmark:** The STDChallenge within the STDChallenge Benchmark still exhibits a long-tail distribution, which is somewhat distant from real-world conditions. How to increase data under specific challenges remains an open problem that needs addressing to bridge this gap. --- [a] Hu, S., Zhao, X., & Huang, K. (2024). SOTVerse: A user-defined task space of single object tracking. International Journal of Computer Vision, 132(3), 872-930. [b] Fan, H., Yang, F., Chu, P., Lin, Y., Yuan, L., & Ling, H. (2021). 
Tracklinic: Diagnosis of challenge factors in visual tracking. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 970-979). --- Rebuttal Comment 1.1: Comment: I appreciate the authors' time and effort in addressing the questions and providing clarifications. I encourage them to include these explanations in the final version of the paper. --- Reply to Comment 1.1.1: Comment: Thank you for your review. We will incorporate additional explanations following your suggestions and acknowledge the anonymous reviewers in the revised version of our paper.
Summary: - The authors generalize the visual tracking problem as a human dynamic visual ability task, and propose a new benchmark named STDChallenge to evaluate visual tracking algorithms. This new benchmark includes challenging scenarios with spatio-temporal discontinuity, which previous short-term-tracking-oriented algorithms fail to address. They also propose a new tracking algorithm oriented to the new task that mimics human visual search behavior, and evaluate its effectiveness on multiple benchmarks. Strengths: - A new large-scale benchmark is always a welcome addition to the computer vision and machine learning community, and the new generalized object tracking task proposed by the authors is more challenging in some aspects. - The authors evaluate multiple state-of-the-art tracking algorithms on their proposed dataset, verifying the need for the proposed dataset and providing some insights for solving the new task. - A new tracking algorithm, CPDTrack, oriented to the proposed task is provided, showing some new directions for solving the visual tracking problem under spatio-temporal discontinuity. Weaknesses: - Comparison with the VideoCube [3] dataset: Since the proposed dataset includes video sequences with spatio-temporal discontinuities, it bears large similarities to the VideoCube dataset proposed in [3], which includes sequences that require global search and target recovery after disappearances. In what aspects does the proposed dataset differ from VideoCube? (i.e., sequence length, object category, discontinuity, etc.) - Proposed CPDTrack algorithm: The proposed CPDTrack utilizes global and local search simultaneously, implemented by using two search images cropped at search areas of large and small sizes. Although the authors explain that the proposed method was inspired by cognitive neuroscience, this approach was widely used in previous visual tracking algorithms. 
- [a] Effective Local and Global Search for Fast Long-term Tracking, TPAMI 2022 - [b] Learning Regression and Verification Networks for Robust Long-term Tracking, IJCV 2021 - [c] ‘Skimming-Perusal’ Tracking: A Framework for Real-Time and Robust Long-term Tracking, ICCV 2019 How does the proposed method differ from these algorithms? Also, these papers should be included in the references. Technical Quality: 3 Clarity: 2 Questions for Authors: Please refer to the questions in the weaknesses section. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors included a separate discussions and limitations section in their paper, with adequate explanations and descriptions on the weaknesses and possible future directions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and feedback! We will include the new analyses below in the revision. We provide responses to the specific points below: **Differences with VideoCube:** VideoCube is a benchmark specifically designed for the GIT task, consisting of 500 sequences characterized by frequent shot cuts and extra-long sequences. Unlike VideoCube, the STDChallenge Benchmark is aimed at integrating LTT and GIT, providing a more comprehensive video environment for the study of spatio-temporal discontinuities. Derived from multiple large datasets, the STDChallenge Benchmark not only addresses the bias of single datasets (a focal point in AI research) but also exhibits the following characteristics compared to VideoCube: - **Data volume:** The test set of the STDChallenge Benchmark contains 252 sequences, significantly more than the 50 sequences in VideoCube. - **More rational division of challenge attributes:** Referencing [d], we have standardized the challenge attributes across sequences from different datasets, marking the sequences in various datasets with the same set of challenge attributes. Furthermore, we have quantified the difficulty of the STDChallenge across sequences. - **More diverse dataset distribution:** As shown in Fig.8 in the paper and Fig.R1 in the supplementary pdf, the greater volume of data allows the STDChallenge Benchmark to include a wider variety of sequences with different STD; the length distribution is also broader. This facilitates a more comprehensive evaluation of trackers. --- **Differences with local-global trackers:** We discuss these works in the related work section and Fig. 2 of the paper. These algorithms are referred to as local-global trackers in the paper, characterized by a "switching" module that decides whether to switch to global re-detection based on the performance of local trackers. 
We believe that this is fundamentally different from CPDTrack: The key issue in the design of local-global trackers is how to devise a switching strategy between local tracking and global re-detection. In existing algorithms, the decision to switch from local tracking to global re-detection is still entirely determined by the local tracking predictions. This means that when making the switching decision, information outside the local search area is still ignored. If the actual target is not within the local search area, this increases the risk of the algorithm mistakenly identifying a distractor as the target instead of activating the global re-detector. We analyzed the three papers you referenced: - [a] uses the score from a target verifier within the local search module to decide whether to switch strategies; - [b] employs a verification network with a local view to identify the target from the detected candidates. If the target disappears, a learning-based switching scheme determines whether to trigger the global search mode. - [c] uses a verifier in the 'perusal' module to judge the confidence score of the target's presence, thereby deciding whether to switch strategies. CPDTrack, on the other hand, possesses both local and global perspectives, mitigating the risk of drifting to distractors in the local area in STDChallenge, and enhancing the tracker's visual search capabilities. These three excellent papers are significant in the evolution of visual trackers; therefore, we will include a discussion of these works in the related work section. --- [a] Zhao, H., Yan, B., Wang, D., Qian, X., Yang, X., & Lu, H. (2022). Effective local and global search for fast long-term tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1), 460-474. [b] Zhang, Y., Wang, L., Wang, D., Qi, J., & Lu, H. (2021). Learning regression and verification networks for robust long-term tracking. International Journal of Computer Vision, 129(9), 2536-2547. 
[c] Yan, B., Zhao, H., Wang, D., Lu, H., & Yang, X. (2019). 'Skimming-Perusal' Tracking: A framework for real-time and robust long-term tracking. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 2385-2393). [d] Hu, S., Zhao, X., & Huang, K. (2024). SOTVerse: A user-defined task space of single object tracking. International Journal of Computer Vision, 132(3), 872-930. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response to my questions, and the additional details provided by the authors were very helpful. Also, based on the concerns from the other reviewers and the authors' response to address these issues, I am inclined to raise my rating to "weak accept". --- Reply to Comment 1.1.1: Comment: We are grateful for your increased rating to Weak Accept. We will incorporate the additional results and analyses following your suggestions and acknowledge the anonymous reviewers in the revised version of our paper.
Rebuttal 1: Rebuttal: We thank the reviewers for their efforts and invaluable suggestions. This work is inspired by the cognitive science theory of the central-peripheral dichotomy (CPD) and introduces a tracker, CPDTrack, designed to address the STDChallenge. Its effectiveness and similarity to human behavior have been validated through benchmark tests and the Visual Turing Test. We are also very grateful that all reviewers have given our work high praise (accept *1, weak accept *1, borderline accept *1), which encourages us to continue working courageously within the 'Science for AI' research paradigm. Thank you for the valuable reviews pointing out that 1) we focus on a challenging yet valuable issue (the problem of spatio-temporal discontinuity in videos) and acknowledge the proposed STDChallenge Benchmark (x2Xs, 17n1), 2) it is highly novel to integrate CPD theory into visual object tracking, and the performance of CPDTrack is affirmed (x2Xs, 17n1, 16gz), and 3) our evaluation on the STDChallenge Benchmark is comprehensive (x2Xs, 16gz), with the Visual Turing Test noted as particularly noteworthy (16gz). Prompted by the insightful reviews, we mainly present the following additional experimental results and analyses for the common questions: - Reviewer x2Xs recognizes the performance and value of CPDTrack. Your concerns primarily focus on the novelty aspects of CPDTrack and the STDChallenge Benchmark. Some ambiguous descriptions may have led to confusion between our work and others. We will clarify these distinctions using figures in the paper, supplementary figures, and references, highlighting the fundamental differences between our proposed method and existing approaches. - Reviewer 17n1 affirmed the novelty of several aspects of our work. Your concerns mainly focus on the generalizability and costs associated with CPDTrack. We will provide a comprehensive analysis of CPDTrack's advantages and potential limitations. 
Moreover, we believe that CPDTrack's extended applicability in real-world scenarios is worth the associated costs. - Reviewer 16gz affirmed the novelty and performance of our work. Your concerns primarily focus on the evaluation experiments conducted on the STDChallenge Benchmark. We will further explain the scalability of the STDChallenge Benchmark and provide additional details on the setup of the Visual Turing Test. Thank you for your valuable suggestions. We will address each of your concerns individually in our responses. Additionally, we have included a supplementary 1-page PDF, which presents some attributes of the STDChallenge Benchmark and visualizations in graphical form, aiming to provide you with a clear and intuitive understanding of the details of our work. We also look forward to discussing further with the reviewers during the discussion period. Should you have any more questions at that time, please feel free to engage with us. We hope to improve our work with your assistance. Pdf: /pdf/631f672044a3fdb53e4a804f2caf69ab6527db06.pdf
NeurIPS_2024_submissions_huggingface
2024
A Kernel Perspective on Distillation-based Collaborative Learning
Accept (poster)
Summary: This work presents an analysis of a distillation-based collaborative learning algorithm called FedMD from a nonparametric perspective. The method adopts an operator-theoretic approach to obtain an upper rate on the expected generalization error of local models. The authors then propose DCL-KR, a privacy-preserving, nearly minimax optimal collaborative learning algorithm based on kernel regression for massively distributed, statistically heterogeneous environments. Strengths: + The idea seems to be interesting. + The first work tries to prove the (nearly) minimax optimality of a privacy-preserving collaborative learning algorithm. Weaknesses: - The paper's organization and structure are not clear. - The writing reads like different parts of different papers combined without a clear thread. - The paper suddenly gives many assumptions in Section 3 for analysis but does not give any intuition or insight into them. - Some important experimental results are in the appendix. I would suggest that the authors extract insights from the experimental findings and present them in the main body. - The language used in the paper is not concise. Technical Quality: 2 Clarity: 1 Questions for Authors: The related work only gives a rough summary of the previous works. When the authors focus on the theoretical analysis, could the authors compare their theoretical bounds to other works? There are many important experiments in the appendix, such as C.3.1. Can the authors summarize empirical findings with a better paper structure? In Section 3.3, the paper starts with "To derive theoretical results" and lists a lot of assumptions without any intuitive understanding behind them. Could the authors give an overview and intuition of theoretical results? Minor: I feel very confused when reading the paper as it introduces many conclusions/results without too much explanation. I suggest the authors elaborate on the intuitions before going into the details. 
Confidence: 4 Soundness: 2 Presentation: 1 Contribution: 3 Limitations: The paper presents a discussion of limitations, which is not convincing. For example, "there is no rigorous study that discusses the privacy preservation advantages of distillation-based collaborative learning". Actually, the standard technique for privacy is not distillation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for the detailed comments and hope to address all of the questions and concerns raised by the reviewer. > The paper's organization and structure are not clear. > The writing looks like combining different parts of different papers without a clear clue. We attempted to write the manuscript well, but it seems that our manuscript did not effectively convey its content to the reviewer. Please refer to the general response. Our general response contains the outline of the overall flow of our manuscript and an additional explanation of the connection between DCL-KR and DCL-NN. Through this general response, we hope that the reviewer finds our manuscript to have a natural flow. > In Section 3.3, the paper starts with "To derive theoretical results" and lists a lot of assumptions without any intuitive understanding behind them. Could the author give an overview and intuition of theoretical results? We are sorry to hear that the reviewer had difficulty reading our theoretical analysis. In the future version, we will provide an overview and intuition of the theoretical results at the beginning of Section 3.3 as follows: * In this subsection, we prove the nearly minimax optimality of DCL-KR; * This result implies that DCL-KR has almost the same convergence rate as the minimax optimal central training when there are sufficiently many public inputs; * To the best of our knowledge, this is the first work to prove the nearly minimax optimality of a privacy-preserving collaborative learning algorithm with kernel regression in massively distributed statistically heterogeneous environments (Lines 179 – 181 in our manuscript). We also provide a detailed explanation of assumptions from Section 3 in the general response. Please refer to the general response for details on this matter. > There are many important experiments in the appendix, such as C.3.1. Can the authors summarize empirical findings with a better paper structure? 
Due to the page limitation, we only included some of the experimental results and discussions in the main body. However, we agree with the reviewer that some important experimental discussions should be included in the main text to convey meaningful content and improve its structure. If permitted, we will utilize an additional page to incorporate the following results from the appendix into the main body. * We will include Figures 3, 4, and 5(c)-(d) in the main body. We will include the following discussions in the main text: * As shown in Figure 3, DCL-KR outperforms the baselines in all experimental settings and achieves comparable performance to the central models. In contrast, DC-NY and DKRR-NY-CM exhibit significantly lower performance compared with DCL-KR in massively distributed environments, which their theory does not cover. * IED does not show a significant performance drop in massively distributed environments. However, as predicted by the theoretical results, to perform well, IED requires more public inputs compared with DCL-KR. Moreover, when there is a public distribution shift, the convergence rate of DCL-KR is maintained, whereas that of IED worsens. We can also confirm that DCL-KR can address this distribution shift effectively with a large amount of public inputs. * Overall, our experiments validate the theoretical results of DCL-KR and demonstrate its superiority over previous results. > The language used in the paper is not concise. If allowed, we will make an effort to revise the manuscript to ensure it is concise and clear. > The related work only gives a rough summary of the previous works. When the authors focus on the theoretical analysis, could the authors compare their theoretical bounds to other works? Indeed, the previous works similarly aim to achieve (nearly) minimax optimality (i.e., prove inequalities that are similar to (2) in our manuscript). Thus, there is no difference in the bound itself except for logarithmic factors and prefactors. 
Note that the prefactors are not considered in the analysis of minimax optimality. In addition, $\log n$ grows slower than any polynomial. The main differences lie not in the theoretical bound itself but in the decentralized environment settings that guarantee the bound, as summarized in Table 1. Some prior works have limitations on the number of local parties. For example, DKRR-NY-CM assumes $m\leq O(n^{(2r+s-1)/(2r+s)})$ (with the same notation in our manuscript). In contrast, nFedAvg directly communicates local data and IED does not consider statistically heterogeneous cases. We elaborate on these points in lines 105 – 117 and lines 179 – 209. Our experimental results also demonstrate performance degradation of baselines when they are applied outside their assumed settings. > The paper presents a discussion of limitations, which is not convincing. For example, "there is no rigorous study that discusses the privacy preservation advantages of distillation-based collaborative learning". Actually, the standard technique for privacy is not distillation. As the reviewer pointed out, distillation is not a standard technique for privacy. However, as mentioned in Section 1, Distillation-based Collaborative Learning (DCL) algorithms have been proposed in the context of Federated Learning (FL) where local data privacy preservation is crucial. This is because, as mentioned in Lines 1383 – 1385, the communicated information is predictions on public data, which is expected to preserve local data privacy due to its black box nature. Initially, traditional FL algorithms that exploit parameter exchange also do not provide clear evidence for local data privacy concerns, but recent discussions address these issues extensively. In summary, we emphasize in Appendix D that DCL has the potential to preserve local privacy and that there is a need to study privacy concerns in DCL, similar to the research flow in traditional FL algorithms. 
--- Rebuttal Comment 1.1: Comment: Thanks for the detailed explanation and response. After reading the whole page, I am willing to improve my rating based on trusting the authors to modify the paper. Thank you very much.
Summary: The authors propose a nonparametric version of FedMD, a distillation-based collaborative learning methodology. They also propose a neural network variant of the nonparametric approach as an extension for heterogeneous local neural network models. They provide both theoretical results on the nonparametric approach and experimental results demonstrating the efficacy of the neural network extension. The theoretical results prove the nearly minimax optimality of this nonparametric collaborative learning algorithm in massively distributed statistically heterogeneous environments. Strengths: (1) The authors prove the (nearly) minimax optimality of a privacy-preserving collaborative learning algorithm using kernel regression in massively distributed, statistically heterogeneous environments. (2) Theorem 3.4 addresses a more general setting than prior works [48, 58]. Su et al. [58] only cover the case r = \frac{1}{2} of Assumption 3.3. Park et al. [48] do not consider Assumption 3.2, which provides a finer result. Furthermore, compared to [48], the required size of public inputs is reduced and the statistical homogeneity condition is dropped. (3) The rate n^{-\frac{r}{2r+s}} is the minimax lower rate under Assumptions 3.1, 3.2, and 3.3, making DCL-KR nearly optimal in a minimax sense. (4) On the difference between public and local distributions: Theorem 3.4 allows the public input distribution \tilde{\rho}_x to differ from the local input distribution \rho_x. (5) Finally, the difference between \rho_x and \tilde{\rho}_x impacts the expected risk upper bound through a multiplicative factor B_r, which can be removed by increasing the number of public inputs. Weaknesses: (1) The need to perform learning rate scaling to ensure the impact of local iterations is consistent across models introduces additional complexity. The exact method for scaling may vary and might not be trivial to determine or implement. 
(2) The computation of Gram matrices for CKA involves O(p^2) operations, where p is the number of public data points. This quadratic complexity can become prohibitive as the size of the public dataset increases. (3) The method assumes that the ensemble kernel k = \sum_{i=1}^{m} \frac{n_i}{n} k_{f_i} will perform better than individual feature kernels. This assumption might not hold in cases where local models are highly heterogeneous, as the averaged kernel could lose important characteristics of individual models, leading to suboptimal performance. The authors empirically verify the superiority of the ensemble in the appendix. Technical Quality: 3 Clarity: 2 Questions for Authors: (1) How do you handle the computational burden associated with the calculation of Gram matrices for large datasets? (2) How do you handle cases where these representations before the last layers of the neural networks are suitable for kernel matching? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
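The ensemble kernel in weakness (3), k = \sum_{i=1}^{m} \frac{n_i}{n} k_{f_i}, is just a data-size-weighted sum of the local feature kernels evaluated on a shared set of public inputs. A minimal sketch of forming its Gram matrix follows; the function name, feature dimensions, and dataset sizes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ensemble_gram(feature_mats, sizes):
    """Data-size-weighted ensemble Gram matrix: sum_i (n_i / n) * F_i F_i^T,
    where each F_i holds one local model's features on the same public inputs."""
    n = sum(sizes)
    return sum((n_i / n) * (F @ F.T) for n_i, F in zip(sizes, feature_mats))

rng = np.random.default_rng(0)
# Features of 100 public inputs under three heterogeneous local models
# (different feature dimensions to mimic heterogeneous architectures).
feats = [rng.normal(size=(100, d)) for d in (16, 32, 64)]
sizes = [200, 300, 500]  # illustrative local dataset sizes n_i

K = ensemble_gram(feats, sizes)
# A nonnegative-weighted sum of PSD Gram matrices is PSD,
# so K remains a valid kernel matrix.
```

Because each summand F_i F_i^T is positive semidefinite and the weights n_i/n are nonnegative, the ensemble Gram matrix is itself a valid kernel matrix, which is what makes it usable as a distillation target.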
Rebuttal 1: Rebuttal: We appreciate the reviewer for the detailed comments and hope to address all of the questions and concerns raised by the reviewer. > (1) The need to perform learning rate scaling to ensure the impact of local iterations is consistent across models introduces additional complexity. The exact method for scaling may vary and might not be trivial to determine or implement. As the reviewer pointed out, we calculate and utilize the empirical CKA of the local feature kernels over all public inputs for learning rate scaling. We use this approach for precise calculation, but we agree that it can be computationally intensive. Therefore, we additionally conduct experiments on RotatedMNIST by sampling a few public inputs and calculating the empirical CKA based on them. The experimental results are as follows: |GD, 50 data points used|GD, full data points used| |---|---| |$0.243\pm 0.006$|$0.243\pm 0.006$| We can observe that when using vanilla gradient descent, calculating empirical CKA with 50 public inputs is sufficient for learning rate scaling. When using Adam, we obtain the following results, also indicating that calculating empirical CKA with 50 samples is sufficient. |Adam, 50 data points used|Adam, full data points used| |---|---| |$0.227\pm 0.003$|$0.227\pm 0.003$| Consequently, calculating empirical CKA for learning rate scaling can be done with a small number of public inputs, and thus it does not require extensive computational resources. We will include these results in the manuscript. > (2) The computation of Gram matrices for CKA involves O(p^2) operations, where p is the number of public data points. This quadratic complexity can become prohibitive as the size of the public dataset increases. We think this is a good point. The kernel distillation procedure is a crucial component in enhancing performance by maintaining the theoretical assumptions of DCL-KR in the neural network setting, so calculating the Gram matrix is unavoidable. 
However, we can apply some tricks to compute it more efficiently. For example, we can perform this calculation on the server to reduce the computational burden imposed on local parties. First, the local parties directly upload the raw features of public inputs (whose dimension is $n_0 \times d$ where $n_0$ is the number of public inputs and $d$ is the dimension of the local feature) to the server. Then the server can compute the feature kernels. Since this approach only requires the local party to perform forward propagation for the public inputs, it significantly reduces the computational cost on the local parties. Additionally, this method can also reduce communication costs when dealing with a large amount of public data. To reduce the overall computational burden (including the server), we believe that an additional study is needed, and we will strive to address this issue in future work. > (3) The method assumes that the ensemble kernel k = \sum_{i=1}^{m} \frac{n_i}{n} k_{f_i} will perform better than individual feature kernels. This assumption might not hold in cases where local models are highly heterogeneous, as the averaged kernel could lose important characteristics of individual models, leading to suboptimal performance. The authors empirically verify the superiority of the ensemble in the appendix. In fact, the ensemble kernel theoretically has stronger expressivity than each individual local feature kernel, so with a sufficiently large amount of data, a better regressor will be trained when we use the ensemble kernel. We demonstrate this in Appendix B (lines 1142 - 1145). Therefore, with enough data, it is not possible for the ensemble kernel to perform worse than individual local feature kernels. However, fully distilling the ensemble kernel is challenging. Thus, there is a possibility of losing important characteristics (that the reviewer is concerned about) during the kernel distillation process. 
Precisely, some important features may have a lower portion in the ensemble kernel and so they can potentially be missed during the distillation process. One possible solution is for each local party to construct local feature kernels by selecting important features and adjusting weights accordingly. > (1) How do you handle the computational burden associated with the calculation of Gram matrices for large datasets? Please refer to our responses above. > (2) How do you handle cases where these representations before the last layers of the neural networks are suitable for kernel matching? In our work, we assume that the local model is of the form $f(\cdot) = w^\top g(\cdot) + b$ and design our algorithm accordingly. All neural network architectures with this form (that is, the last layer is a fully connected layer) are suitable for kernel matching. Note that most modern neural networks have this structure. Therefore, in most cases, our algorithm is applicable. If other forms of neural networks need to be utilized, we anticipate that significant changes to the algorithm would be required. Fundamentally, DCL-NN is inspired by DCL-KR. DCL-KR is based on kernel regression (a type of linear model) and this nature plays a crucial role in its theoretical results. Exploring ways to generalize the extension from DCL-KR to neural network settings would be a valuable direction for future work. --- Rebuttal Comment 1.1: Comment: Thanks for the response. This has cleared up most concerns for me. On this comment, >Therefore, with enough data, it is not possible for the ensemble kernel to perform worse than individual local feature kernels. However, fully distilling the ensemble kernel is challenging. Thus, there is a possibility of losing important characteristics (that the reviewer is concerned about) during the kernel distillation process. 
Precisely, some important features may have a lower portion in the ensemble kernel and so they can potentially be missed during the distillation process. One possible solution is for each local party to construct local feature kernels by selecting important features and adjusting weights accordingly. This is particularly what I was aiming at. On re-reading my review, I realise I was not clear. I agree that with enough data the ensembled kernel cannot be worse than the local kernels. I agree the problem here is the distillation process and that piece should be explored in future work. There is work on leveraging Bochner's theorem for learning kernels from data in the Gaussian process literature. Perhaps these approaches could provide more optimal kernels at the local level. I've updated my score given this response.
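The learning-rate-scaling step discussed earlier in this thread relies on the empirical CKA between feature kernels, computed on a (possibly subsampled) set of public inputs. A minimal sketch of linear CKA follows; the function names, feature dimensions, and subsample size of 50 are illustrative assumptions mirroring the rebuttal's experiment, not the authors' code.

```python
import numpy as np

def center_gram(K):
    # Double-centering: H K H with H = I - (1/n) 11^T.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def linear_cka(X, Y):
    # CKA between the linear feature kernels of X (n, d1) and Y (n, d2),
    # both evaluated on the same n public inputs.
    K = center_gram(X @ X.T)
    L = center_gram(Y @ Y.T)
    return np.sum(K * L) / (np.linalg.norm(K) * np.linalg.norm(L))

rng = np.random.default_rng(0)
feats_a = rng.normal(size=(500, 64))  # model A's features on public inputs
feats_b = rng.normal(size=(500, 32))  # model B's features on public inputs

# A small subsample (here 50 public inputs) can approximate the CKA
# computed on the full public set, avoiding the O(p^2) Gram cost.
idx = rng.choice(500, size=50, replace=False)
cka_full = linear_cka(feats_a, feats_b)
cka_sub = linear_cka(feats_a[idx], feats_b[idx])
```

By the Cauchy-Schwarz inequality, CKA lies in [0, 1], and CKA of a feature matrix with itself is exactly 1, which makes it a convenient scale-free similarity for weighting learning rates across heterogeneous local models.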
Summary: The paper investigates the theoretical underpinnings and practical implementation of distillation-based collaborative learning (DCL) from a kernel regression perspective. The authors propose DCL-KR, a nonparametric version of the FedMD algorithm, which achieves nearly minimax optimal convergence rates in massively distributed and statistically heterogeneous environments. Building on these theoretical insights, the authors introduce DCL-NN, a practical DCL algorithm designed for neural networks, which incorporates feature kernel matching to align local models. Extensive experiments on various regression tasks demonstrate the superiority of DCL-KR and DCL-NN over existing methods. Strengths: 1. The paper provides a rigorous theoretical analysis of DCL-KR, proving its nearly minimax optimality in distributed and heterogeneous settings. 2. Experiments are comprehensive. Six datasets are included. 3. The improvement of DCL-NN is significant over the other baselines in some datasets. Weaknesses: 1. The paper lacks a discussion on the privacy risk and communication efficiency of DCL-KR and DCL-NN. 2. The paper is based on a distillation-based collaborative learning method that requires public data, which limits its application in private domains. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does the choice of kernel affect the performance/theoretical guarantee of the proposed approach? 2. Is transferring predictions a common practice in collaborative learning for regression? If not, how does it compare with other approaches in terms of privacy and efficiency? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for the detailed comments and hope to address all of the questions and concerns raised by the reviewer. > The paper lacks a discussion on the privacy risk and communication efficiency of DCL-KR and DCL-NN. Basically, our work shares the typical (widely used) assumption of distillation-based federated learning (that the collaborative distillation method, interacting through the predictions on public inputs, can preserve privacy due to its black-box nature). However, as explained in Appendix D, privacy concerns for DCL algorithms have not been sufficiently explored. This issue is not straightforward because it requires a fundamental analysis of the privacy preservation of functional information transfer through knowledge distillation. This is a very interesting topic, but it appears to be beyond the scope of our work and is left for future study. Regarding communication efficiency, both DCL-KR and kernel regression-based baselines require a communication cost of $O(n_0)$ per communication round (except the download of public inputs) where $n_0$ is the number of public inputs. DCL-NN communicates the Gram matrix (with communication cost $O(n_0^2)$) and requires $O(n_0)$ communication cost per communication round in the collaborative learning phase. Compared with FedMD and KT-pFL, DCL-NN has higher communication costs due to transmitting the Gram matrix. Since FedHeNN also uses kernel matching but performs it in batches for each communication round, in cases where many communication rounds are needed, DCL-NN is more efficient than FedHeNN. However, when we use DCL algorithms for pretraining of DCL-NN, more communication cost is required for pretraining. Therefore, there is a trade-off between performance and communication cost. 
To reduce the communication cost of DCL-NN, we can use the following idea: if the feature dimension $d$ of the local model is smaller than $n_0$ (the number of public inputs), a direct transmission of the raw features (of dimension $n_0 \times d$) or of a decomposition of the gram matrix can reduce the communication cost. > The paper is based on distillation-based collaborative learning method that requires public data, which limits the applications on private domains. As the reviewer pointed out, there may be cases where it is difficult to collect public data. However, as explained in Section 1, we would like to emphasize that our problem involves prohibiting the direct exchange of not only local data but also model information. These restrictions necessitate an additional information-sharing medium. In this context, our theoretical findings highlight a strength of our algorithm. According to Theorem 3.4 and Corollary A.7, as long as the public data distribution covers the support of the local data distribution, theoretical performance guarantees are provided. Moreover, when the public data has a different distribution from the local data but there is a large number of public inputs, it can compensate for the distribution gap and ultimately achieve the same performance as central training. > How does the choice of kernel affect the performance/theoretical guarantee of the proposed approach? In our theoretical analysis, the kernel is related to the quantities $r$ and $s$ when the target function is given. These quantities determine the minimax lower rate $O(n^{-r/(2r+s)})$ in Theorem 3.4, meaning that the convergence rate varies depending on the choice of kernel. Hence, a faster eigenvalue decay of the kernel (smaller $s$) and better regularity of the target function (larger $r$) in the RKHS induced by this kernel (which means the RKHS represents the target function well in some sense) yield better performance (a faster convergence rate).
We can also see that a good choice of the kernel reduces the number of required public data points $n^{1/(2r+s)}\log^3 n$ in Theorem 3.4. > Is transferring predictions a common practice in collaborative learning for regression? If not, how does it compare with the other approaches in terms of privacy and efficiency? In terms of not directly sharing local data, Federated Learning (FL) with parameter exchange can be considered a similar approach to Distillation-based Collaborative Learning (DCL). However, to the best of our knowledge, in scenarios where neither the model nor the local data can be directly shared, DCL methods are the only option. As stated in Section 1, our setting is distinguished from traditional FL (which exploits parameter exchange) by the inability to directly share the local model. Therefore, a direct comparison between DCL and traditional FL may not be appropriate in general. Of course, putting these points aside, we can simply compare DCL with traditional FL in terms of privacy and efficiency. As noted in Appendix D of our manuscript, there is an intuition that DCL may be better than parameter exchange-based FL in terms of privacy. In more detail, parameter exchange has a white-box nature in which the internal information of the model is shared, while the predictions on public data have a black-box nature because the structure of each local model is not known externally. However, we acknowledge that a more rigorous discussion of privacy for DCL is required, as mentioned earlier. From an efficiency standpoint, a simple comparison of communication cost is possible. When using large models, DCL-NN (which communicates the public inputs and gram matrix only once and shares predictions in each communication round) can be more efficient than communicating model parameters in each communication round. However, DCL algorithms still exhibit worse performance compared with traditional FL in general. This is because DCL addresses a more challenging problem.
We believe that DCL needs to improve its effectiveness before focusing on efficiency improvements, and we expect our work to play a significant role in this aspect. --- Rebuttal Comment 1.1: Comment: Thanks for the author's response. The response has addressed my questions and I'll keep my positive score.
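The prediction-sharing loop discussed in this exchange (local parties interact with the server only through predictions on shared public inputs, which are combined by weighted averaging) can be illustrated with a toy kernel-regression sketch. This is not the DCL-KR algorithm itself: the Gaussian kernel, the one-shot kernel ridge solves, and the size-proportional weights are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=10.0):
    # Pairwise RBF kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def local_predict(X, y, Z, lam=1e-3, gamma=10.0):
    # Kernel ridge regression fit on local data (X, y); predictions on public inputs Z.
    K = gaussian_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
    return gaussian_kernel(Z, X, gamma) @ alpha

rng = np.random.default_rng(0)
target = lambda x: np.sin(2 * np.pi * x[:, 0])
Z = rng.uniform(size=(200, 1))  # public inputs shared by the server

# Statistically heterogeneous parties: each covers a different input region.
preds, sizes = [], []
for low, high, n in [(0.0, 0.6, 40), (0.3, 1.0, 60), (0.0, 1.0, 50)]:
    X = rng.uniform(low, high, size=(n, 1))
    y = target(X) + 0.05 * rng.standard_normal(n)
    preds.append(local_predict(X, y, Z))
    sizes.append(n)

# Server: consensus prediction as a data-size-weighted average on Z.
w = np.array(sizes) / sum(sizes)
consensus = np.einsum("i,ij->j", w, np.stack(preds))
print(np.sqrt(np.mean((consensus - target(Z)) ** 2)))  # consensus RMSE on Z
```

Note that only the length-$n_0$ prediction vectors cross the network in this sketch, matching the $O(n_0)$ per-round communication cost mentioned above.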
Summary: This paper theoretically proves a nonparametric version of the most standard distillation based collaborative learning algorithm (named DCL-KR) is nearly minimax optimal in massively distributed statistically heterogeneous environments. Extensive experiments demonstrate their theoretical results and show the practical feasibility of DCL-NN. Strengths: 1. The paper is well-written. 2. The authors conducted extensive experiments to verify the effectiveness of the proposed method. Weaknesses: Please see Limitations Technical Quality: 2 Clarity: 2 Questions for Authors: Please see Limitations Confidence: 1 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: This is not my field of study and I can only give a Borderline Reject. I will adjust my score based on the scores of other more specialised judges. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for the detailed comments. We briefly describe the main contributions of our work below: * Applying kernel regression theory, we analyze the most representative DCL algorithm, FedMD, from a nonparametric perspective (called DCL-KR). Compared with existing studies, our work is the first to theoretically demonstrate the effectiveness of a DCL algorithm in statistically heterogeneous and massively distributed environments. * Based on the theoretical results, we propose a novel neural network-based DCL algorithm (called DCL-NN) using kernel matching. * Through experiments, we validate the theoretical results and demonstrate the superiority of our algorithms (DCL-KR and DCL-NN) by comparing their practical performance with baselines. Kindly let us know if you require any further clarification or have additional comments. --- Rebuttal Comment 1.1: Comment: Thank you for the author's response. I will take into account the feedback from other reviewers to adjust my scoring subsequently. --- Rebuttal 2: Comment: After reading feedback from other reviewers, I am willing to improve my rating based on trusting the authors to modify the paper.
Rebuttal 1: Rebuttal: We appreciate all reviewers for the detailed comments. In this general response, we provide additional explanations to help the reviewers better understand our work. In detail, we (1) briefly outline the overall flow of our manuscript, (2) provide additional explanations on the connection between DCL-KR and DCL-NN, and (3) provide additional explanations on the assumptions introduced in Section 3. If permitted, we will include them in the revised version. **Overall Flow of Our Manuscript** As explained in Section 1 of our paper, we first provide a theoretical foundation for a nonparametric version of FedMD [1] (called DCL-KR) and use this result to design its neural network version (called DCL-NN). In detail, we first obtain the nearly minimax optimality of DCL-KR through kernel regression theory. This implies that DCL-KR with sufficiently many public inputs has almost the same convergence rate as minimax optimal central training (Section 3). While this theoretical analysis represents a significant improvement over previous results (Table 1, lines 179-209), it does not fully explain the performance guarantee of neural network-based DCL. So, we design a new DCL algorithm (DCL-NN) inspired by the theory of DCL-KR (lines 50-56). [1] D. Li and J. Wang. FedMD: Heterogeneous federated learning via model distillation. arXiv preprint arXiv:1910.03581, 2019. **Additional Explanation on the Connection between DCL-KR and DCL-NN** Indeed, we explicitly state that the equality of local kernels contributes to the successful analysis of DCL-KR (lines 215-218), and DCL-NN is a natural extension of DCL-KR in this view. Here, we provide a detailed explanation of this point. For simplicity, consider the case of $E=1$ in DCL-KR, where $E$ is the number of local iterations.
After the consensus prediction $u$ is distributed to the local parties, the server receives the updated local prediction on $Z$, $(I-\frac{\eta}{n_i} K_{ZX_i}K_{X_i\tilde{Z}}K_{\tilde{Z}\tilde{Z}}^{-1})u + \frac{\eta}{n_i} K_{ZX_i}y_i$, from the $i$-th local party. (The notation is consistent with the main body and Appendix A.2.1.) Suppose the data at the $i$-th local party and the $j$-th local party are exactly the same. If the same kernel is used in these two parties, the updated local predictions will be identical. However, if the kernels of the two parties are different, this will not be the case. For kernels like the Gaussian kernel, which has high correlation between close inputs, the updated local prediction will be influenced more strongly by data points close to each input. We can observe this fact from the above formula. On the other hand, for kernels like the linear kernel, which has high correlation between distant inputs, the updated local prediction on $Z$ will be influenced more by data points farther from each input. This observation implies that aggregating local learning information becomes very challenging when the kernels differ. In summary, using the same kernel ensures that the shift mechanisms of the predictions on $Z$ at the edges are identical, which makes it possible for aggregation through simple weighted averaging to work well. This is a key to the strong theoretical results of DCL-KR. Therefore, DCL-KR and DCL-NN are deeply connected. **Additional Explanation on the Assumptions in Section 3** Note that, as mentioned in lines 160-162, Assumptions 3.1, 3.2, and 3.3 are commonly used when deriving the upper rate of the expected generalization error. In recent decades, these assumptions have been used as the standard setting in kernel regression analysis. Relevant literature [2, 3] often introduces these assumptions without detailed explanations.
Nonetheless, we acknowledge that these assumptions might seem unfriendly to those unfamiliar with this research context. Therefore, we provide an explanation of these assumptions here. Basically, as noted in lines 160-162, Assumption 3.1 concerns the regularity of the noise, Assumption 3.2 the regularity of the kernel, and Assumption 3.3 the regularity of the target function. These assumptions influence the minimax lower rate [4]. In detail, * Assumption 3.1 implies that the noise is not excessively large. In fact, noise satisfying the Bernstein condition satisfies Assumption 3.1. For instance, Gaussian/sub-Gaussian noises and bounded noises satisfy Assumption 3.1. Therefore, this assumption is a very general noise condition that encompasses a wide range of cases. * Assumption 3.2 is about the eigenvalue decay of the kernel. This is a crucial factor when studying the behavior and properties of the kernel. For example, from this assumption, one can derive bounds on the effective dimension, which is related to covering and entropy number conditions. * Assumption 3.3 is related to the regularity of the target function, specifically how well the RKHS induced by the kernel represents the target function. Under the above assumptions, the minimax lower rate is given by $O(n^{-r/(2r+s)})$. Thus, under these assumptions that precisely specify the minimax lower rate, our work analyzes whether DCL-KR has a similar upper rate. As a result, we demonstrate that DCL-KR is nearly minimax optimal, and so we can conclude that DCL-KR has almost the same performance as the central kernel regression model. Many prior works also study the minimax optimality of other kernel regression-based algorithms under similar assumptions (lines 92-117, Table 1, lines 179-209 in our manuscript). [2] Y. Li, H. Zhang, and Q. Lin. On the saturation effect of kernel ridge regression. In International Conference on Learning Representations, 2023. [3] S. Park, K. Hong, and G. Hwang.
Towards understanding ensemble distillation in federated learning. In International Conference on Machine Learning, pages 27132-27187. PMLR, 2023. [4] A. Caponnetto and E. De Vito. Optimal rates for the regularized least-squares algorithms. Foundations of Computational Mathematics, 7:331-368, 2007.
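As a compact summary of the $E=1$ aggregation step described earlier in this response (the weights $w_i$, e.g. proportional to local sample sizes, are an illustrative assumption; the remaining notation follows the rebuttal):

```latex
u^{t+1} \;=\; \sum_{i} w_i \left[ \left( I - \frac{\eta}{n_i}\, K_{Z X_i} K_{X_i \tilde{Z}} K_{\tilde{Z}\tilde{Z}}^{-1} \right) u^{t} + \frac{\eta}{n_i}\, K_{Z X_i}\, y_i \right]
```

When all parties use the same kernel, each bracketed operator shifts the predictions on $Z$ in the same way, so the weighted average is again an update of the same form; with heterogeneous kernels the operators mix the public-input predictions differently, and simple averaging loses this structure.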
NeurIPS_2024_submissions_huggingface
2024
Summary: In this paper, the authors perform a study of distillation-based collaborative learning (DCL) in massively distributed statistically heterogeneous environments. In particular, the study focuses on analyzing DCL-KR, a non-parametric version of FedMD, and proves its near-minimax optimality. The authors proposed DCL-NN, a method that leverages kernel matching to implement DCL with heterogeneous neural networks. Experimental results support the theoretical analysis of DCL-KR and demonstrate the effectiveness of DCL-NN. Strengths: The paper provides a theoretical analysis of DCL-KR and proposes a practical algorithm for using neural networks in massively distributed statistically heterogeneous environments. The authors perform experiments with both synthetic and real-world datasets to support the theoretical results and evaluate the performance of DCL-NN. Weaknesses: The paper does not perform any formal privacy analysis. As such, I believe using terms like "privacy-preserving" can be misleading in this context. Technical Quality: 4 Clarity: 4 Questions for Authors: Could the authors avoid mentioning privacy as no formal privacy study is conducted? Confidence: 1 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The paper unnecessarily emphasizes privacy while admitting that privacy is not formally defined (cf. question). It would be nice to provide more discussion on this aspect or to present some empirical evidence of the claimed benefit (e.g., performing privacy attacks such as membership inference) Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for the detailed comments and hope to address all of the questions and concerns raised by the reviewer. > The paper does not perform any formal privacy analysis. As such, I believe using terms like "privacy-preserving" can be misleading in this context. > Could the authors avoid mentioning privacy as no formal privacy study is conducted? In our manuscript, we use the term “privacy-preserving” to describe distillation-based collaborative learning (DCL) in the sense that it exchanges predictions on public data (functional information) rather than directly exchanging local data. As we mentioned in lines 1383-1385, we believe that this black-box nature allows DCL to protect local data privacy. In a similar vein, as mentioned in Sections 1 and 2, DCL has been studied mainly in the context of Federated Learning (FL), which aims to preserve local data privacy in decentralized learning. Therefore, previous studies have also been conducted based on this intuitive (though not rigorous) reasoning. However, as the reviewer pointed out (and as mentioned in Appendix D of our manuscript), there has not been a rigorous study addressing the privacy concerns of DCL. Fundamentally, we agree that it is necessary to comprehensively discuss the privacy preservation advantage of functional information transfer through knowledge distillation. This seems to be a very interesting topic, but it is beyond the scope of our work and is left for future study. In summary, our work shares the belief widely assumed in previous DCL works (particularly in the context of FL), but we understand the reviewer’s concern about using terms like “privacy-preserving” without rigorous discussion. We will strive to soften these expressions in future versions to ensure that readers understand accurately. --- Rebuttal 2: Title: Thank you for the responses Comment: Thank you for the responses. I will keep my positive score.
null
null
null
null
null
null
Dealing with Synthetic Data Contamination in Online Continual Learning
Accept (poster)
Summary: This paper investigates the impact of AI-generated images on the performance of online continual learning (CL) models. It introduces a novel method called Entropy Selection with Real-synthetic similarity Maximization (ESRM) to mitigate the negative effects of synthetic data contamination. ESRM leverages entropy-based sample selection and a contrastive learning approach to align the feature embeddings of real and synthetic data, thereby enhancing the robustness of CL models against the degradation caused by synthetic data. Strengths: - The authors clearly articulate the problem, methodology, and results, enhancing the paper's accessibility and understanding for a broad readership. - The paper uniquely identifies the issue of synthetic data contamination in online continual learning, a significant challenge for the future of this field. The work has substantial implications for the ML community, offering a pioneering approach to maintaining the integrity of continual learning models in the presence of synthetic data. - This paper proposes ESRM, an innovative method that combines entropy selection and contrastive learning to mitigate the negative effects of synthetic data, demonstrating creativity in addressing this new problem. - The paper is underpinned by robust technical approaches, ensuring the quality and reliability of the proposed solution through comprehensive experimental validation. Weaknesses: - The paper evaluates the impact of synthetic data using a limited set of generative models. Moreover, the experiments are primarily conducted on image classification datasets. Expanding this to include a broader range of tasks, particularly text-to-image and text-to-text generation, could strengthen the findings. - The method for generating synthetic datasets is straightforward, using simple prompts.
Incorporating more complex and diverse prompts, potentially using large language models to simulate user queries, could better reflect real-world synthetic data contamination. - The reliance on entropy as a key metric for distinguishing real from synthetic data might not be universally applicable. Further exploration of this metric's effectiveness across different domains and its theoretical underpinnings could bolster the method's credibility. - While the paper provides insights into the method's effectiveness, there is limited discussion on its computational efficiency and scalability, especially when dealing with large-scale datasets or high contamination ratios. Addressing these aspects could be crucial for practical applications where computational resources and time are critical constraints. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses above. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback and valuable comments. ## Weaknesses 1. We indeed tested our method with a limited set of generative models, due to the computation constraint. Also, it is true that almost all of the existing work in online Continual Learning (CL) is limited to Class-Incremental Learning (CIL) scenarios, where the major task is image classification. We fully agree that broadening the scope of tasks would be beneficial, which we take as an important direction for future work. 2. Following the suggestion, we conducted extra rebuttal experiments as shown in the general rebuttal (1). We use LLaMA to generate more diverse and complex prompts to simulate a more realistic contamination scenario. The experimental result in Table I in the rebuttal material validates the effectiveness of our method when the synthetic images are generated with LLM-enhanced prompts. 3. The entropy criterion is indeed important in our work, and we would like to demonstrate the universality of the entropy metric both empirically and theoretically. Empirically, we expand the experiments to broader settings. The Domain-Incremental Learning setting and the LLM enhancement setting in Table I of the rebuttal material further demonstrate the universality of ESRM. Theoretically, classifiers trained on a dataset of limited diversity often overfit, resulting in confident predictions on the training data but poor generalization to new data. Such a pattern is universal and not limited to synthetic contamination. Methods like Label Smoothing [4] and Confidence Penalty [5] have been proposed to alleviate such problems. In our paper, this limited-diversity issue is magnified by synthetic contamination, since the output diversity of current generative models remains imperfect. With limited diversity, synthetic data are easier for the model's feature extractor to cluster (cf. Fig. 3 in the paper) and for the classifier to classify, leading to more confident (lower-entropy) predictions (cf. Fig.
2 in our paper). We will include such experiments and theoretical discussions to further improve our manuscript. 4. We regard computational efficiency and scalability as crucial criteria for our method, especially in online continual learning scenarios. Thus, we included the training time on the CIFAR-100 dataset in Sec. D.7 and Fig. 11 in the appendix. Following the suggestion, to show how the computation of different baselines scales with larger datasets, in general rebuttal (4) we include the training time of all baselines on the TinyImageNet and ImageNet-100 datasets. As shown in Fig. II of the rebuttal material, the training time of our method is significantly shorter than that of some state-of-the-art methods like OCM and OnPro, while it is on par with the most efficient method. Also, we would like to clarify that the training computation is agnostic to the contamination ratio, because in the generation of the contaminated dataset, we *replace* the real images with synthetic images without changing the size of the dataset. Reference: [4] "Rethinking the inception architecture for computer vision." CVPR 2016 [5] "Regularizing neural networks by penalizing confident output distributions." arXiv:1701.06548 (2017) --- Rebuttal Comment 1.1: Title: Thank you for your rebuttal Comment: Thank you for the rebuttal. This paper is very interesting and promising. After considering the comments from other reviewers and the rebuttal, I decided to raise my score. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback and valuable comments.
Summary: Image generation has been showing promising achievements in the last few years, but the generative models may not be able to keep up with the distribution of the real samples. Due to low diversity, synthetic images perform poorly in downstream learning tasks. This paper tackles this problem by diversifying the sample selection using the proposed ESRM framework, which prioritizes the high-entropy samples in the experience replay memory with an RM loss function. The authors first performed experimental analysis to build their motivation in Section 4 and proposed their solution to the problem in Section 5. The experiments are diversified with several datasets, benchmarks, and extensive ablation studies. Strengths: 1. The paper is nicely written, with clear language and sufficient details. 2. Section 4's experimental analysis of existing CL methods clearly motivates the proposed model. 3. The authors propose a novel loss function to address the issue of low-entropy samples in CL. This approach builds upon existing supervised contrastive loss techniques. Their method aims to maximize the cosine similarity between embeddings of real and synthetic samples while simultaneously minimizing the impact of low-entropy samples on the learning process. 4. In the ablation study, the authors studied different components of the proposed framework, which shows the superiority of the overall ESRM. Weaknesses: 1. The paper only deals with diffusion generative models. I understand that diffusion models are the most powerful image generators, but it would be good to experiment with other generative models. 2. The accuracy scores are overall low in CL. Does the proposed method 3. Why is a fixed 50% dropping rate selected for ES? 4. Downstream ML tasks have been explored with synthetic data to validate synthetic datasets, and related works clearly miss that angle. The authors should also discuss them and maybe add some of them to benchmarks.
Technical Quality: 3 Clarity: 4 Questions for Authors: Authors can refer to the weaknesses section for major questions. Here, I listed minor points for the authors' reference: 1. It seems there is a typo in line 231. 2. It could be good to highlight the best results in Tables 3 and 4, the same as in Table 2. Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: I agree with the limitation the authors brought up. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback and valuable comments. ## Weaknesses 1. In our experiments, we prioritize diffusion-based contamination for two reasons. Firstly, training with synthetic images generated with GAN-based or autoencoder-based methods yields catastrophic performance, especially when the contamination is severe. Another reason is that we find GAN-based or autoencoder-based synthetic images are relatively easy to distinguish from their real counterparts, which hinders them from constituting a de facto “contamination”. For example, in our preliminary experiments, we tried to use GALIP [1], a state-of-the-art text-to-image GAN-based generative method, to generate images with our text prompt, as shown in the general rebuttal (6). Fig. III in the rebuttal material shows that GALIP suffers from an extremely limited diversity problem. While we prioritize diffusion-based contamination, we regard using LLM-enhanced prompts to diversify GALIP-generated images as an important future direction. 2. In this work, we focus specifically on the online Continual Learning (CL) setting, where synthetic contamination is a more realistic and severe issue because the online setting prevents us from assessing the quality of the training data beforehand. Unfortunately, due to the online restrictions, there is still a salient performance gap between conventional CL and state-of-the-art online CL. Also, it is noteworthy that ESRM achieves clean performance on par with the current state-of-the-art methods (OCM, GSA, and OnPro). We hope our research sheds light on the research field and helps narrow the gap. 3. In principle, the dropping rate in ES is a hyperparameter affected by the contamination ratio of the training dataset. There are two reasons to choose a fixed (50%) dropping rate.
**(a)** We assume the continual learner does not have prior information about the contamination ratio of the dataset, which is more realistic in the real world. This is also the reason why we do not and should not perform a hyperparameter search for the dropping rate for each contamination ratio. **(b)** A drop rate of 50% gives competitive performance even under extreme conditions (contamination ratio = 95%). We believe that the obtained performances demonstrate that a fixed ES dropping rate is resilient to different conditions (dataset, contamination ratio). 4. There is indeed excellent work on validating synthetic data, for example, UniFD [2], FatFormer [3], etc. We will include such discussions in the related work section of the revised manuscript. Experimentally, we replaced ESRM's entropy selection strategy with the pretrained UniFD synthetic data detector, as illustrated in the general rebuttal (5). In our test, the accuracy of the UniFD detector is 66.19% on the C100/SDXL dataset. Table II in the rebuttal material shows that UniFD is more effective than random selection, but its performance is still limited. This is due to a distribution mismatch between our dataset and the dataset used to train the detectors. Moreover, due to the online constraint, we had to set the threshold of UniFD at 0.5, because we cannot search for the optimal parameter without knowledge of the dataset distribution. ## Questions &emsp; Thank you for your suggestions; we will carefully revise the manuscript and update the tables for better readability of our paper. Reference: [1] "GALIP: Generative adversarial clips for text-to-image synthesis." CVPR 2023 [2] "Towards universal fake image detectors that generalize across generative models." CVPR 2023 [3] "Forgery-aware adaptive transformer for generalizable synthetic image detection." CVPR 2024
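The fixed 50% dropping rate discussed above can be sketched in a few lines. This is a schematic reading of entropy selection, not the paper's implementation; the softmax-entropy score and the `keep_ratio` parameter are assumptions for illustration.

```python
import numpy as np

def entropy_select(logits, keep_ratio=0.5):
    # Keep the keep_ratio fraction of samples with the HIGHEST softmax entropy;
    # confident (low-entropy) samples, which tend to be synthetic under limited
    # generator diversity, are dropped.
    z = logits - logits.max(axis=1, keepdims=True)      # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    ent = -(p * np.log(p + 1e-12)).sum(axis=1)          # per-sample prediction entropy
    k = max(1, int(round(keep_ratio * len(ent))))
    return np.sort(np.argsort(-ent)[:k])                # indices of kept samples

# Toy batch: rows 0-4 have near-uniform logits (uncertain), rows 5-9 are peaked (confident).
logits = np.zeros((10, 4))
logits[5:, 0] = 10.0
print(entropy_select(logits))  # -> [0 1 2 3 4]
```

Because no prior knowledge of the contamination ratio is assumed, `keep_ratio` stays fixed at 0.5 regardless of how contaminated the incoming batch actually is, mirroring point (a) above.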
Summary: This paper investigates the negative impact of synthetic data contamination on existing online continual learning methods. An entropy selection with real-synthetic similarity maximization method is proposed to alleviate the performance deterioration. Strengths: 1. Detailed analysis of synthetic data contamination and its influence on continual learning. 2. This paper is technically clear and easy to follow. Weaknesses: 1. The creation of simulated data is the cornerstone of this research, but there is a lack of detailed explanation of how these data are generated in the main paper. 2. For Observation 4, "With the limited diversity of synthetic data", how about mixing the synthetic data from different generation models to increase the diversity, since the synthetic images on the internet also come from different models? Does the observation remain unchanged in this case? 3. In order to simulate real data collection more realistically, in addition to synthetic data, new open-domain real data should also be incorporated. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The limitations are discussed while the potential negative societal impact is not discussed. However, for this work, I think it is not necessary to discuss this. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback and valuable comments. ## Weaknesses 1. We would like to include more detailed information on the synthetic dataset generation process. For Stable Diffusion and VQDM, we use source code and model snapshots from Hugging Face, as mentioned in Table 12 in the appendix. For the GLIDE experiments, we use the official implementation and the released model snapshots. Following the recommendation, we use the refiner in Stable Diffusion XL and the upsampler in GLIDE. The diffusion steps and guidance scale hyperparameters we use are as follows: | Generative Model | Diffusion Steps | Upsample(Refiner) Steps | Guidance Scale | |:-----------------:|:-----------------:|:----------------:|:----------------:| | SD1.4 | 50 | N/A | 7.5 | | SD2.1 | 50 | N/A | 7.5 | | SDXL | 40 | 40 | 5.0 | | VQDM | 100 | N/A | 7.5 | | GLIDE | 100 | 27 | 3.0 | For the other hyperparameters, we follow the recommendations from Hugging Face and GLIDE's official implementation. We use the prompt "An image of a {class_name}." as the text guidance to generate the image and interpolate the generated image to the size of the target dataset (32 for CIFAR, 64 for TinyImageNet, and 224 for ImageNet-100). We will include such detailed information in the revised manuscript to aid the reproducibility of our paper. Also, for reproducibility, we will include the source code for synthetic dataset generation in our project codebase and make it publicly available. 2. As suggested in Sec. 3.1, we have two different strategies for simulating synthetic data contamination: **(a)** using data generated from SDXL only and **(b)** mixing data generated from different generative models. In Fig. 3 of our paper, we have included the t-SNE visualization result for the **(a)** setting to ground Observation 4.
For setting **(b)**, besides the final average accuracy shown in Table 7 in the appendix, we include extra t-SNE visualization results in the rebuttal material, as introduced in the general rebuttal (3). The t-SNE visualization of feature representations given in Fig. I(a) of the rebuttal material shows that even when generating synthetic data with a mixture of generative models, a gap still exists in feature space between generated and non-generated data when training with ER. These experiments additionally support the claims of Observation 4. 3. Following the suggestion, we extended our experiments from Class-Incremental Learning (CIL) only to also include Domain-Incremental Learning (DIL) experiments on the DIL-CIFAR20 dataset. As mentioned in the general rebuttal (2), the DIL results of ESRM outperform the other baselines, showing the robustness of ESRM against synthetic contamination. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal. Comment: The authors have addressed most of my concerns. I decide to keep my initial positive rating. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback and insightful comments.
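The simulated-contamination protocol referenced throughout this discussion (replacing a fraction P of the real images with generated counterparts, per Sec. 3.2 of the paper) can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the authors' code; `contaminate` and its arguments are hypothetical names.

```python
import random

def contaminate(real_images, synthetic_images, p, seed=0):
    """Replace a fraction p of the real images with their synthetic
    counterparts (index i of both lists refers to the same instance).

    Illustrative reconstruction of the contamination protocol described
    in the rebuttal, not the authors' implementation.
    """
    assert len(real_images) == len(synthetic_images)
    assert 0.0 <= p <= 1.0
    rng = random.Random(seed)  # seeded for reproducibility
    n = len(real_images)
    # Pick round(p * n) indices without replacement to replace.
    replaced = set(rng.sample(range(n), round(p * n)))
    return [synthetic_images[i] if i in replaced else real_images[i]
            for i in range(n)]
```

For instance, with `p=0.3` and 100 paired images, exactly 30 entries come from the synthetic list, while the class/instance pairing at each index is preserved.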
null
null
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their comments and suggestions, which enabled us to improve the manuscript significantly. We respond to each reviewer individually. Here, we introduce the rebuttal experiments in the attached PDF file. ## 1. Results with LLM-Enhanced Prompts As suggested by reviewer h2ny, we expand the synthetic dataset generation from the baseline prompt ("An image of {class_name}") to LLM-enhanced prompts. We leverage the open-source instruction-fine-tuned LLaMA-3 model with 8 billion parameters (`meta-llama/Meta-Llama-3-8B-Instruct`) to generate enhanced prompts for the CIFAR-100 dataset. For each class, 50 prompts are generated, and we use Stable Diffusion XL to generate 10 images per prompt to form the LLM-enhanced CIFAR-100 dataset (denoted as C100/SDXL-LLaMA). Some examples of LLM-generated prompts are as follows: 1. A juicy red oval apple with a tiny worm hole is sitting on the wooden kitchen counter. 2. A vibrant orange tropical aquarium fish with iridescent scales swims lazily in a large glass tank on the sunlit windowsill. As shown in Table I in the rebuttal material, for all baseline methods, the performance deterioration from synthetic contamination remains significant when the synthetic dataset is generated with LLM-enhanced prompts. Even with such advanced prompts, ESRM can significantly alleviate the performance deterioration while achieving satisfactory results. Notably, the LLM-enhanced results (C100/SDXL-LLaMA in Table I of the rebuttal material) sometimes show performance drops compared with the non-LLM-enhanced results (C100/SDXL in Table 2 of our paper). This is because the LLM might introduce undesired items into the language prompt. ## 2.
Domain-Incremental Learning (DIL) results While our current experiments mainly focus on the Class-Incremental Learning (CIL) setting of online continual learning, we include extra experiments in Domain-Incremental Learning (DIL) scenarios, following the suggestion from reviewers ttuB and h2ny. We conducted the experiment with the 20 coarse labels of the CIFAR-100 dataset. Since the 100 classes in CIFAR-100 are grouped into 20 superclasses with 5 fine-grained classes for each superclass, we split the CIFAR-100 dataset into 5 domain increment steps. For each step, we feed the model the training data of one fine-grained class for each superclass. We refer to this dataset as DIL-CIFAR20, as the model only classifies coarse labels. Similar to the simulated CIFAR100/SDXL dataset, we replace the images in the DIL-CIFAR20 dataset with their Stable Diffusion XL generated counterparts with a contamination ratio P, as per the protocol in Sec. 3.2. Table I of the rebuttal material shows the final average accuracy with different contamination ratios. Notably, we adapted the CIL-specific components in OnPro and GSA to the DIL scenario, and their performance suffered a noticeable loss. We did not report ERACE results because its Asymmetric Cross Entropy (ACE) loss reduces to standard cross-entropy loss in the DIL scenario, making it equivalent to vanilla ER. The experimental results show that ESRM yields robust performance against domain shift in the DIL setting under different synthetic contamination situations, which validates the effectiveness of ESRM under DIL settings. ## 3. Visualization on C100/Mix dataset As suggested by reviewer ttuB, to better ground the claim in Observation 4, we included extra visualization experiments on the C100/Mix dataset, where the synthetic data are generated from a mixture of different generative models. From the t-SNE visualization result in Fig. I of the rebuttal material, we can see that, similar to Fig.
3 in the paper, the feature gap of the baseline method (ER) in the embedding space still holds even when the synthetic data is generated with different generative models. Also, with ESRM, the feature misalignment problem is alleviated. ## 4. Training time on other datasets We have shown the training time on the CIFAR-100 dataset in Fig. 11 in the appendix. As suggested by reviewer h2ny, to show how different methods scale to larger datasets, we include the training time of all baselines on the TinyImageNet and ImageNet-100 datasets. As shown in Fig. II in the attached material, we plot the training time on a logarithmic scale for better readability. The computational efficiency of ESRM is always on par with the most efficient method (ER), while significantly outperforming OCM and OnPro. ## 5. Benchmarking pre-trained synthetic image detectors As suggested by reviewer pFAa, we replace ES in our memory strategy with UniFD [2], a pre-trained synthetic image detector. We use the prediction of UniFD to determine the synthetic status (i.e., whether the image is real or synthetic) of the training images, and store the real images in the memory buffer. As shown in Table II of the rebuttal material, UniFD outperforms random selection but still has limited effectiveness despite leveraging extra information from pre-training. One explanation is that it is not designed for online learning scenarios. Due to the online constraint, we set the threshold of UniFD to 0.5 instead of searching for the best threshold parameter (because we do not know the dataset distribution). Also, the domain gap between the UniFD training set and our dataset further limits its performance. ## 6. Image Generation Results with GANs We include image generation results with GALIP [1], a state-of-the-art text-to-image GAN model. Following our image generation protocol, we use the prompt "An image of {class_name}" to generate synthetic images. We use the GALIP model pretrained on the CC12M dataset. As shown in Fig.
III in the attached material, the generation result of GALIP suffers from a severe lack of diversity. Reference: [1] "GALIP: Generative Adversarial CLIPs for Text-to-Image Synthesis." CVPR 2023 [2] "Towards universal fake image detectors that generalize across generative models." CVPR 2023 Pdf: /pdf/503a7dc3d75db44b002cdf5623f58b0a15cbb2c1.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
The Secretary Problem with Predicted Additive Gap
Accept (poster)
Summary: The paper examines the value-maximization secretary problem, where values $w_1 \geq \ldots \geq w_n$ are observed in a uniformly random order, and investigates how the optimal competitive ratio of $1/e$ can be improved in a learning-augmented setting with predictions about an additive gap. Specifically, the authors demonstrate that, given the value of any additive gap $c_k = w_1 - w_k$, even with $k$ unknown, the competitive ratio can be improved to $0.4$. Furthermore, if $k$ is known along with $c_k$, an even better competitive ratio, dependent on $k$, can be achieved. When only a prediction of $c_k$ is provided, the authors adapt the algorithm to ensure robustness against potentially erroneous predictions. Finally, they conduct simulations that confirm their theoretical findings and validate the effectiveness of the proposed algorithms in practice. Strengths: * The paper is well-written, with a good balance between technical results and intuitive explanations of the problem. * The theoretical results are interesting. * The new type of advice presented is well-motivated from both theoretical and practical perspectives. * The appendices present interesting follow-up research directions Weaknesses: * The paper does not provide any tightness results proving the optimality of the presented algorithms. * The bounds that depend on the error in Section 5 assume that the algorithm knows the prediction error is at most $\epsilon$. The results would be stronger if the performance of the algorithms, without any information on the prediction quality, could be expressed as a function of the prediction error. This is a more typical criterion sought in learning-augmented algorithms: smoothness. Minor Weaknesses: * A major argument justifying the study of additive gaps is to explore weaker types of advice. In that sense, the paper should perhaps cite previous works exploring weak types of advice.
For example, "Online Search With a Hint" studies different types of advice and provides Pareto-optimal algorithms for each. "Paging with Succinct Predictions" examines advice of limited size. Other papers consider a limited number of predictions in problems with many unknown variables: "Parsimonious Learning-Augmented Caching", "Advice Querying under Budget Constraint for Online Algorithms", "Non-clairvoyant Scheduling with Partial Predictions", ... * In figures, I suggest using different line styles (for colorblind readers, black and white prints, etc.) and saving them in PDF format for better quality. Technical Quality: 4 Clarity: 3 Questions for Authors: See the weaknesses. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The assumptions of the theorems are clearly stated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Concerning the tightness of our results: the mentioned implication of the two-best secretary yields an upper bound of $0.5736$ for exact additive gaps. In addition, we will of course address your minor weaknesses in the final version. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response and for pointing out the upper bound.
Summary: This paper studies the famous secretary problem under the setting with predictions: some extra advice (presumably generated by some machine learning algorithm) that enables the algorithm to do well when this advice is accurate (consistency), but will not force the algorithm to do poorly if it is inaccurate (robustness). There have been quite a few previous papers on the secretary problem with predictions, which have studied a variety of different predictions and settings. However, essentially all of these previous papers have used "strong" predictions, e.g., some extra information about every secretary. This paper asks a slightly different question: what is the "weakest" piece of information that still allows us to do better than the classical $1/e$ bound? They propose the "predicted additive gap": the difference between the largest weight and the $k$'th largest weight for some $k$. They give a few results about this prediction: - If we are given such a gap, then even if we do not know $k$, we can get competitive ratio at least $0.4$ (notably better than $1/e$). - If we are given an incorrect gap but also an upper bound $\epsilon$ on its (additive) distance from a true gap, then we can get $0.4 OPT - 2\epsilon$ in expectation. - If we are given an incorrect gap and do not have a bound on its error, we can design an algorithm that gets $1/e + \Omega(1)$ if the gap is correct and $\Theta(1)$ if it is not. Strengths: Overall, I like this paper and think that it should be accepted, although it has a few weaknesses. Its strengths include the following. - The question of "are there very weak pieces of information that still allow for nontrivial improvement?" seems quite interesting to me and pretty novel. There is a line of work on "advice complexity" for a variety of problems, which asks for the minimum number of bits necessary to get useful information, but "# bits" is not always the right metric for strong or weak.
For example, knowing precisely the max weight is a small number of bits, but is a much stronger piece of information than an additive gap. I completely agree with the authors that an additive gap seems like a weak piece of information, and it is somewhat surprising (though not hard in retrospect) that an additive gap is sufficient to do better than $1/e$. - The algorithms are reasonably natural, which is always a plus. - The secretary problem is so fundamental, and the predictions setting is so popular, that anything interesting about secretaries with predictions is a good contribution. Weaknesses: - I am somewhat unimpressed by the "consistency vs robustness" phrasing. In particular, it is extremely binary: either the prediction is perfectly accurate or it is not. Instead, the more modern way of analyzing algorithms with predictions is to analyze the quality of the algorithm as a function of the prediction error. So then "robustness" would just correspond to the behavior when the error tends to infinity. This paper only provides such an analysis when we know an upper bound on the error, and then uses that upper bound in the algorithm. It would have been much more interesting (and stronger) to give an algorithm which does not have such an upper bound, but where the behavior is still a function of the error. - While the motivation for this paper is to give the weakest prediction that is useful, when doing algorithms with predictions there is usually some discussion of why the prediction is "reasonable". Why is it reasonable to think that we might know (or have a good guess for) the additive gap? I don't see any discussion about this in the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: - Do any of your algorithms have bounds that are a function of the (unknown) error? - Why is it reasonable to think that we might have a good prediction for the additive gap? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This is fine.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The additive gap is a useful piece of advice when we are concerned about data privacy; for example, when we only have access to a translation of previous instances: instead of seeing $w_i$, we see $w_i - X$ where $X$ is some random shift of all the values. In this way, the maximum weight element is hidden; however, we present an algorithm which can still use this data. More generally, algorithms which are still able to perform with weak pieces of advice have the advantage of being amenable to obfuscated data. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I like this paper and will keep my score at 7. I'm not sure that I buy the "obfuscated data" argument, since it seems to me that for most natural notions of privacy (e.g., differential privacy), the additive gap would not be preserved (although it might be preserved approximately) and, more importantly, there are other things one could learn which would be better. But I fundamentally like the question of "what are extremely weak predictions that still allow for improvement", so I think this paper should be accepted.
Summary: This work considers the classical secretary problem in an online decision-making with hints setting, where the objective is to select the element with the highest weight from an online, random-order stream of elements with adversarially chosen weights, and where the decision to select or reject elements is irrevocable. This is a well-studied problem, and there is a simple threshold-based algorithm that achieves a tight competitive ratio of $1/e$. This paper considers this problem in a learning-with-hints setting, where the learner is given access to an additional piece of information, which in this paper is assumed to be the gap between the weight of the highest-weight item and the $k^{th}$ highest-weight item for some $k$. The authors show that this additional piece of information suffices to design an algorithm that breaks the classical $1/e$ competitive ratio barrier (of the no-hints setting) by a constant factor. In particular, their algorithm achieves a competitive ratio of $\max(0.4, 0.5(k+1)^{-k^{-1}})$ when the exact gap between the weights of the best and $k^{th}$ best item is known, along with the value of the index $k$, and a competitive ratio of $0.4$ when only the exact gap is known, but not the index $k$. In the case where the provided gap may be erroneous, they give an algorithm with fairly non-trivial robustness vs consistency tradeoffs. Lastly, in the case of bounded errors with a known error bound $\epsilon$, they give an algorithm that achieves a competitive ratio that is nearly $0.4$, with a small additive loss of $2\epsilon$. All algorithms are deterministic, and are basically simple variants of the classical threshold-based strategy that achieves the $1/e$ competitive ratio in the classical no-hints setting. Strengths: The paper is very well written and easy to follow. All proofs, to the best of my understanding, are correct, and the algorithms are very simple and intuitive.
They are basically simple variants of the threshold-based strategy for the classical no-hints setting. Weaknesses: While I find the theoretical contributions of this paper to be sound, I am not entirely certain about its novelty or impact on this space of online learning / learning with hints. However, I must say that while I am quite familiar with research in the online learning space, as well as the secretary problem and its variants, I am not quite up to date with very recent research in this learning-with-hints space. Technical Quality: 4 Clarity: 4 Questions for Authors: What is the broader impact of this work on the online learning-with-hints literature? Does it have any implications beyond the particular problem studied in this paper? Are the algorithmic ideas or proof techniques more generally applicable to other online learning-with-hints problems? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: I do not have any major concerns about the negative impact of this work. My only concern is its broader applicability. See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Our main contributions are twofold. From a conceptual point of view, our work emphasizes and justifies the study of weak prediction models. This question can of course also be asked for other online (learning) problems. For example, it is easy to see that the canonical $1/2$-tightness instance in Prophet Inequalities can easily be solved with additive gaps. Second, as a technical contribution, the underlying algorithmic ideas (e.g., incorporating a gap in the algorithm) can also be applied to other classes of online selection problems (in addition to the mentioned Prophet Inequalities, also to Pandora's Box, ...). --- Rebuttal Comment 1.1: Comment: I don't quite understand. Are you saying that the same problem setting (i.e., additive gaps) can be extended to other online selection problems, or can similar algorithmic ideas that are developed in this paper be extended to other online selection problems when given additive gaps? The latter is a much stronger claim and I don't immediately see why this is true. That being said, I agree with reviewer rRpH's assessment that the fundamental problem setting of trying to understand what is the weakest piece of information needed to break classical lower bounds is a very interesting area of research. However, I will be keeping my score, which is a weak accept, mostly because I am not entirely up to date with newer research in this area, so I'm not fully certain how big a contribution this paper is (aside from introducing the general problem setting, which I agree is very interesting). --- Reply to Comment 1.1.1: Comment: Thanks for the reply. The same problem setting can be considered in other (online) learning environments as well, for example in i.i.d. Prophet Inequalities. Now, the benchmark to beat is the optimal $0.745$-competitive algorithm, which is also threshold-based.
Given an additive gap, one approach to improve upon the $0.745$ is to incorporate the gap in the threshold in the same way as we do in our algorithmic template. If this gap is large, it will help to exclude a lot of elements and thus give us a hint which elements are really worth accepting. If this gap is small, we know that the best and, say, second/third/fourth best value in the sequence are pretty close, so we can benefit from selecting any of these. This gives some (intuitive) evidence that not only the prediction model could be interesting in other problems, but also the underlying algorithmic framework.
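The threshold intuition sketched in this thread can be made concrete with a toy simulation. The following is an illustrative guess at a gap-augmented rule consistent with the description above, not the paper's exact algorithm; the function names and the specific acceptance rule are assumptions.

```python
import math

def classical_secretary(stream, sample_frac=1 / math.e):
    """Classical 1/e rule: observe a sampling prefix without accepting,
    then accept the first element that beats everything seen so far."""
    cutoff = int(sample_frac * len(stream))
    best_sample = max(stream[:cutoff], default=float("-inf"))
    for w in stream[cutoff:]:
        if w > best_sample:
            return w
    return stream[-1]  # forced to take the last element

def gap_secretary(stream, c, sample_frac=1 / math.e):
    """Toy gap-augmented variant: after the sampling phase, accept the
    first element within c of the running maximum. With c = w_1 - w_k,
    if the running maximum already equals w_1, any accepted element is
    guaranteed to be among the top-k values; with c = 0 this behaves
    (roughly) like the classical rule."""
    cutoff = int(sample_frac * len(stream))
    running_max = max(stream[:cutoff], default=float("-inf"))
    for w in stream[cutoff:]:
        running_max = max(running_max, w)
        if w >= running_max - c:
            return w
    return stream[-1]  # forced to take the last element
```

On the stream `[3, 10, 4, 6, 9, 8]` the maximum (10) falls into the sampling prefix, so the classical rule never accepts and is forced to take the last element, while the gap rule with c = 1 (the gap between the best and second-best values) still secures the second-best value 9.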
Summary: This paper considers the value-maximization version of the secretary problem with the additional information of the gap between the best and second-best values. The first main contribution of this paper is that this additional information makes it possible to design a $0.4$-competitive algorithm. The second main contribution is a robustness-consistency trade-off: if the predicted gap is correct, then the algorithm achieves better than $1/e$-competitiveness, and otherwise it achieves constant-factor competitiveness. The authors also provide empirical results that show the effectiveness and error-robustness of the proposed algorithm. Strengths: While the existing studies on learning-augmented secretary problems assume predictions of the maximum value or all values, this paper's setting relies on only a smaller piece of prediction information. This paper provides an interesting new result on learning-augmented secretary algorithms. The main idea of the proof is simple and clean. This paper is well-organized and easy to follow. Weaknesses: - The main idea of the proof is elementary. I do not think it is trivial, but the proof technique itself is not so surprising. - Since the main difficulty of the secretary problem lies in the uncertainty of the scale of values, it is not surprising that the additive gap improves the competitive ratio. - The result on bounded errors (Section 5) seems to be obtained just by replacing a predicted gap with its erroneous version in the main proof. - I do not know any realistic scenario in which the predicted additive gap is obtainable but not the predicted maximum value. Technical Quality: 3 Clarity: 3 Questions for Authors: Is it possible to plot the robustness-consistency trade-off (x-axis: robustness, y-axis: consistency) obtained when you choose optimal $\tau$ and $\gamma$? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This paper has no ethical issue.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Please find a plot for the Pareto frontier of the robustness-consistency trade-off in the attached document. When being (close to) zero-robust, we obtain a consistency which is close to the mentioned $0.5736$ upper bound. On the other hand, we obtain guarantees of $\approx 1/e$ robustness and consistency as expected. Of course, we are happy to add/exchange the plot in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for attaching the plot. I think this plot is interesting and helpful if it is included in the updated version of the manuscript.
Rebuttal 1: Rebuttal: We would like to thank all four reviewers for their highly valuable feedback and appreciate the positive spirit concerning the comments and remarks. Concerning the question on guarantees as a function of an unknown error from reviewers rRpH and jGey: When underestimating the gap, Algorithm 1 has a smooth decay with respect to the error which is a direct consequence of our analysis. Still, trying to derive guarantees in general is hopeless when the error is unknown, as overestimating the gap indeed implies a huge drop in the competitive ratio (as we show in Example E.1 and in our simulations, see Figure 3 and its interpretation). As a consequence, we restrict ourselves to the 'sharp/binary' robustness-consistency trade-off and complement this with the results for a known bound on the error. Pdf: /pdf/968afeab0459a4b26652b4a0da71176eadf088cc.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Zero-shot Image Editing with Reference Imitation
Accept (poster)
Summary: The paper proposes a method to modify specific parts of an image with content from a reference image while remaining consistent with the original content. It combines the dual-diffusion technique of replacing the key and value features, as used in MasaCtrl, of the source image with those of the reference image. The method is trained in a self-supervised manner, in which some parts of the image are masked and the model is trained to recover them. Strengths: - The paper is well-written and easy to understand. - The end-to-end pipeline doesn't need fine-tuning for each image. - The proposed method is user-friendly: users only provide the source image, reference image, and mask, and the model then automatically retrieves the correct content from the reference image and aligns it with the original content in the region specified by the mask. - Experimental results are impressive across various categories and metrics. Weaknesses: - The proposed method is not actually zero-shot as it requires training. It requires expensive resources for training the model. - The method presented in this paper lacks novelty. It uses an old technique (i.e., replacing the key and value features, from MasaCtrl [MasaCtrl: Tuning-Free Mutual Self-Attention Control for Consistent Image Synthesis and Editing, ICCV 2023]) and combines it with an old training strategy that recovers the region specified by the mask. - The source and reference images must have the same object scale, and the source image must contain only one or a salient object. Technical Quality: 3 Clarity: 3 Questions for Authors: What if the source image is a complicated image that contains several objects and the desired object to modify is not salient? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The paper is not so novel; it reuses components from existing techniques. The proposed method is not actually zero-shot as it requires training.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ``W1. The proposed method is not actually zero-shot as it requires training. It requires expensive resources for the training model.`` **This comment is erroneous**. "Zero-shot" is different from "training-free." Zero-shot means that we train the model on some examples, and the model's ability can generalize to unseen examples without case-by-case fine-tuning or optimization. Hence, our setting is exactly "zero-shot." Additionally, our model is not expensive to train. Our final model is trained on 8 GPUs, and we have verified that our results can be reproduced with 2 or 4 GPUs. Besides, we do not require any additional human annotations for training our model. ``W2. The novelty and techniques of our methods.`` We disagree. There are misunderstandings about our motivation and novelty. First, the dual U-Net structure similar to MasaCtrl is not our contribution; it is a common practice for injecting the features of reference images, which is widely used in recent papers [56, 17, 46, 6, 58, 45]. Besides, we tackle totally different tasks from MasaCtrl. Second, the core training strategy is not the MAE-like mask completion, but "correspondence learning" between two images. We fully utilize the consistency and variation of video frames, randomly mask one video frame, and use another "full video frame" for completion. Please refer to **A. Core contributions and novelty.** in the global rebuttal block to see our contribution and novelty. ``W3. The source and reference images must be the same scale as the object, and the source image must contain only one or a salient object.`` This is not true. First, we support source and reference objects of different scales. For example, in Fig. 5, the second row features Hinton and Tyson, and in Fig. A4 in the appendix, the first and second rows illustrate the scarf and ears.
These examples all have quite different scales. Additionally, as mentioned in L145-147, we add strong augmentations like resizing during training, making our model robust to scale variance. Second, the source image can contain multiple objects. As shown in the bottom-right example of Fig. 7, our model performs well in one pass. In this paper, we primarily illustrate images with a single salient object for clear visualization. However, our method can support complex source images, and we have added more examples in Fig. R1 in the rebuttal PDF, where the source images contain **complicated scenes and large scale variations**. We will include these examples in the revision. --- Rebuttal 2: Comment: Dear Reviewer, We kindly remind you about our submitted rebuttal. We understand that your time is valuable, and we greatly appreciate the effort you put into reviewing our work. If you have any questions or require further clarification on any points, please don't hesitate to reach out. We look forward to your feedback and hope for a favorable consideration. --- Rebuttal 3: Comment: Dear Reviewer q758, As the deadline for the discussion phase approaches, we wanted to kindly remind you to read our rebuttal. We highly value your feedback and are eager to incorporate your suggestions to improve our work. If there are any aspects that require further clarification or additional information, we would be more than happy to engage in continued discussion and provide any necessary details. --- Rebuttal 4: Comment: Dear Authors, Thank you for your detailed rebuttal. First, I would like to acknowledge that I agree with your definition of the zero-shot method. Second, while I appreciate the detailed explanation of your contributions and their novelty, I believe that the novelty is primarily focused on user experiments related to image editing tasks. From your presentation in the paper, it seems that you are proposing a novel architecture (Imitative U-Net).
However, I still maintain that the overall method may not be entirely novel. Finally, the examples provided in Fig. R1 do not fully convince me. Could you test a more complex example that includes editing smaller objects within the image? For instance, an image with a large number of overlapping objects or objects that do not interact with each other would be more compelling. In conclusion, I would like to keep my rating as 'Borderline Reject.' --- Rebuttal 5: Comment: Thank you for your response. We believe that the contribution and novelty are still misunderstood. ``it seems that you are proposing a novel architecture (Imitative U-Net)`` We state multiple times that **the dual U-Net design is not our novelty/contribution; instead, it is a common practice**. We do not claim it as a contribution. As mentioned in line 121, this structure is widely used in many tasks that require a reference image [56, 17, 46, 6, 58, 45]. These works verified the advantages of the "dual U-Net" over previous approaches like IP-Adapter or ControlNet. We leverage this type of design solely as our feature injection module and do not claim it as our novelty. ``However, I still maintain that the overall method may not be entirely novel.`` We are the **only work** that can realize local region editing with a reference image ***even without providing the reference mask***; for this motivation, please refer to "B. Motivation of getting rid of the reference mask" in the global rebuttal. In addition, we are the **only work** that trains the model using one frame to complete another frame to learn the correspondence. As **the whole task and training strategy** are completely novel, we believe the novelty is sufficient. Again, we would like to reaffirm our core novelty below. **Novel form for image editing.** Unlike the commonly studied task of "object insertion", our proposed "imitative editing" offers a new editing form, which does not require users to specify the region of interest in the reference image.
Such a design significantly simplifies the editing process, because users only need to brush once (i.e., the source image) instead of twice (i.e., both the source image and the reference image), and hence clearly distinguishes our approach from existing methods. It is also noteworthy that getting rid of the reference mask is essential for part-level editing, making our algorithm more flexible in practice. **Novel training strategy.** To accomplish "imitative editing", the model is required to automatically analyze the semantic correspondence between the to-edit region of the source image and the "full" reference image. For such a purpose, our approach benefits more from the training strategy than from the data. Concretely, we propose to leverage the temporal consistency and variation across video frames, and urge the model to recover the masked region of one video frame by only using the information from another frame without specifying the corresponding region. To our knowledge, our proposed training pipeline offers a brand new way of using video data for diffusion-based editing. **Benchmark for the novel task**. We construct a diversified benchmark that contains multiple tracks and covers a large variety of scenarios. Besides, we come up with various metrics to thoroughly evaluate and compare different methods. Our benchmark could be beneficial for further explorations of imitative editing. ``Could you test a more complex example that includes editing smaller objects within the image?`` Please notice that, in the second row of Fig.R1 of the rebuttal PDF, we have added examples for editing "the candle", "the hat", and even "the socks of a full-body human". **We have already shown these small things that occupy less than 5% of the pixels of the source images**. We kindly remind you that the "to-edit" region can be specified by users with a "source mask". 
In this way, **there would not be any challenges** in distinguishing the "to-edit" region even if the environment is complicated with distractors or this region is small. According to the rebuttal policy, the PDF files cannot be updated. We will contact the ACs to see if we can give you more examples. **Please check if this response addresses your concerns; if you have further questions, we are happy to discuss them.** --- Rebuttal 6: Comment: Dear Reviewer, We have added examples for even smaller objects and provided the link to the ACs; you may receive this link from the ACs later. Here we describe the examples: - A camera shot filled with a large number of fruits, where some fruits are partially obscured by others. We edited one of the fruits that was partially obscured. - A complex rural scene with more than ten people and ten dogs or sheep, where there is significant occlusion between the people and animals. We replaced one of the heavily obscured dogs with a tiger. - A windowsill filled with plants (more than 10), with some pots partially obscuring others. We edited one of the pots according to the provided reference image. Please note that editing very small areas is only a corner case with limited practical usage in real-world applications and is unrelated to the main focus of our paper. However, we have still provided the requested samples. We hope these examples fully address your concerns. If you have any new questions, feel free to reach out to us at any time. --- Rebuttal 7: Comment: Dear Authors, Thank you for your further explanations. My concerns are mostly addressed, but it is still not so convincing. I can only increase my rating to Borderline Accept. I already know that the dual U-Net design is not your novelty/contribution. One thing I want to point out is that the way your contribution is presented in the paper confused me. I don't think your method is actually novel; it is only novel in the context of user experiments. 
p/s: I haven't received your examples of small objects. --- Rebuttal Comment 7.1: Comment: Dear Reviewer, Thank you for your acknowledgment. ``the way your contribution is presented in the paper confused me`` Thank you for pointing it out. To make our contribution clearer, we will add a small paragraph at the end of the introduction section to summarize our contributions and novelties. In addition, we will add an explanation paragraph in Section 3.2 to clarify that the dual U-Net is not a contribution of this paper in the revised version. `` It is only novel in the context of user experiments.`` First, image editing is an application-oriented task, and we formulate a more convenient editing form without the reference mask. We think "this novelty in user experience" is already quite important, and could make great contributions to the image editing community. Besides, this "novelty for users" cannot be simply achieved by applying existing techniques; it also requires the support of our novel training strategies. Our video-based training strategy is specifically designed for learning semantic correspondence without reference masks, which is important for realizing our "imitative editing". ``p/s: I haven't received your examples of small objects.`` We have already sent the link to the ACs; maybe they have not read the message. As the discussion period is going to end, we will not be able to make further responses. Please remember to ask the ACs for the results if they forgot to send you the link. By the way, we believe that the examples in the second row of the rebuttal PDF like "candles" and "socks" are already small. Considering that the model receives a "source mask" to indicate the to-edit region, the difficulties for the model would be similar even for more complicated scenarios. In addition, we appreciate your constructive discussions and we will make our paper clearer according to your suggestions.
Summary: To achieve more precise imitative editing, the paper proposes a training framework called MimicBrush. It randomly selects two frames from a video clip, masks some regions of one frame, and learns to recover the masked regions using the information from the other frame. Also, it constructs a benchmark, consisting of two main tasks, i.e., part composition and texture transfer, to evaluate the performance comprehensively. Strengths: - The setting for part-level image editing is a novel idea. I believe it will be very helpful for designers to leverage imitative editing in their use cases. - The paper provides comprehensive experiments to prove its effectiveness. - The paper is well-written and easy to follow. Weaknesses: The definition of inter vs. inner is unclear. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How is rank calculated in Table 2? 2. Regarding line #195, does it mean that when the randomly selected grid number is 4, it would drop 4x75%=3 for SIFT-matched features and 4x50%=2 for other regions? So, in total, it would drop 5 grids? 3. What does the gid ratio mean in line #261? 4. The paper structure can be improved. Sec. 4.4 and Sec. 4.2 contain the “Qualitative comparison”. I suggest having only one Qualitative comparison section and separating it into subsections: “comparing with others” and “diverse applications without comparing with others”. 5. The mask representation is hard to identify, e.g., the right case in the first row or the left case in the third row in Fig. 7. I suggest switching to a clearer representation. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have shared the limitations and potential societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your acknowledgment and constructive suggestions. We will follow your recommendations to polish our paper for the revision. `` W1. The definition of inter vs. inner is unclear.`` **Inter-instance** refers to cases where the source and reference images are of different instances, such as compositing the ear of a "cat" onto the head of a "man". This track corresponds to the most interesting and fantastic applications. However, since there is no ground truth for the composited target (e.g., the given man with the given cat's ear), we evaluate this track using metrics like DINO/CLIP scores (lines 174-176). **Inner-instance** refers to cases where the source and reference images are of the same instance. This track is relevant to post-refinement applications. For example, we might use AnyDoor or DreamBooth to generate a new image of a given bag and find that the generated logos or textures differ from the reference image. Our method can use the reference image to repaint the flawed region, even if they are not in the same pose. In our benchmark, we use real image pairs of the same instance as the source and reference images, manually masking some regions of the source image. In this scenario, the unmasked source image itself serves as the ground truth, allowing us to calculate SSIM, PSNR, and LPIPS. We will make it clearer in the revision. ``Q1. How is rank calculated in table 2?`` We directly average the rank numbers over all test examples, which allows us to compare different methods effectively. ``Q2. Regarding line #195, does it mean that when the randomly selected grid number is 4, it would drop 4x75%=3 for SIFT-matched features and 4x50%=2 for other regions? So, in total, it would drop 5 grids?`` As mentioned in L195, “we randomly choose the grid number N×N from 3 to 10”, thus if the selected grid number is 4, we have 4 x 4 = 16 patches. 
We first split them into SIFT-matched patches (e.g., 4 patches) and non-matched patches (e.g., 12 patches). Then, we drop 4 * 0.75 = 3 matched ones and 12 * 0.5 = 6 non-matched ones. We will make this clearer; thank you for your suggestions. ``Q3. What does the gid ratio mean in line #261?`` The grid ratio here refers to the ratios of masked grid patches, which correspond to the results in Tab.4. We report the results for masking 25%, 50%, and 75% of grid patches. We will clarify this in the text. Thank you for your suggestions. ``Q4 & Q5. The paper structure and mask representation.`` Thank you for your suggestions. We will reorganize the paragraphs and use more effective ways to present the masks, such as using different colors or providing additional binary masks. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for providing further information and contributing great work to the image editing community! I have read the other reviewers' comments and the rebuttal information. I would like to maintain my rating as 'Accept.' --- Reply to Comment 1.1.1: Comment: Thank you very much for your recognition of our work. We greatly value the feedback you provided, and we will follow the discussion to make this paper clearer and better.
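To make the patch-dropping arithmetic above easy to check, here is a minimal sketch; the function name, the floor rounding, and applying the fixed 0.75/0.5 drop rates to an arbitrary matched/non-matched split are assumptions based on the example in this answer, not the paper's exact implementation.

```python
import math

def count_dropped_patches(grid_n, num_matched, matched_rate=0.75, non_matched_rate=0.5):
    """Split the grid_n x grid_n patches into SIFT-matched and non-matched
    groups, then drop a fixed fraction of each group (rounding rule assumed)."""
    total = grid_n * grid_n
    num_non_matched = total - num_matched
    dropped_matched = math.floor(num_matched * matched_rate)
    dropped_non_matched = math.floor(num_non_matched * non_matched_rate)
    return dropped_matched, dropped_non_matched

# The example above: grid number 4 -> 16 patches, 4 of them SIFT-matched.
print(count_dropped_patches(4, 4))  # (3, 6)
```

So the total number of dropped patches depends on the full 4 x 4 = 16 grid, not on the grid number alone, which is where the reviewer's "5 grids" estimate went astray.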
Summary: The paper introduces a novel approach called imitative editing aimed at enhancing user creativity in image editing tasks. Traditional image editing often involves matching references to the source image, which can be challenging. In contrast, imitative editing allows users to directly draw inspiration from in-the-wild references without needing to precisely align them with the source image. This paper introduces a new approach called MimicBrush, a generative training framework that leverages video frame pairs to learn semantic correspondence and recover masked regions in images. Strengths: --This paper proposes a novel task, imitative editing, which is a new editing scenario that enables users to leverage diverse reference images without the need for exact matches, promoting more intuitive and less restrictive editing interactions. --The self-supervised training strategy of utilizing two frames from the same video is intuitive. To ensure effective training, the paper further introduces data selection that selects pairs with neither too large nor too small variations, data augmentation to increase the task difficulty, and a masking strategy that masks patches similar to the reference in order to force the model to take guidance from the reference for reconstruction. --This paper proposes a test benchmark containing part composition and texture transfer to evaluate the performance of imitative editing, which is good. Weaknesses: --Regarding part composition and texture transfer, if I understand correctly, the proposed framework appears capable of simultaneously addressing both tasks. However, based on the training procedure, I am curious about how the model distinguishes between referencing parts and textures. This differentiation does not seem explicitly addressed in the training process, potentially leading to the model misinterpreting user intentions during the inference stage. 
--Regarding the ablation experiments, this paper presents training strategies concerning training data, augmentation, and the masking method. It would be better to also ablate the necessity of image training data. And for all the ablations, it would be better to have qualitative comparisons to illustrate how such training strategies affect the visual quality. Technical Quality: 3 Clarity: 3 Questions for Authors: please see above weaknesses. Overall I think this paper is trying to address a new and interesting editing scenario, which will make image editing more convenient. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your acknowledgment and constructive suggestions; we will follow your suggestions to polish our paper for the revision. ``How the model distinguishes between referencing parts and textures.`` During inference, users have the option to enable "depth control." The depth map serves as a strong condition for maintaining the shape of the original content. When enabled, the model preserves the original shape and is thus forced to conduct texture transfer. Without the depth map, the model has no constraints to retain the original shape, allowing it to change the shape, thus performing part composition. During training, we randomly drop the depth map to ensure the model is compatible with both tasks. ``Ablate the necessity of image training data`` Thank you for your constructive suggestions. We have found that image data is also important for our methods. Please refer to **C. Importance of SAM Data** in the global rebuttal block for details. ``Qualitative comparisons for the ablation study`` We agree that qualitative comparisons can help readers better understand our methods. In fact, we have already included qualitative ablations for different training strategies in Fig.A2 of the appendix. We will add more examples in the revision. We also add visualized ablation studies for the training data in the rebuttal PDF Fig.R3. --- Rebuttal 2: Comment: Dear Reviewer, We kindly remind you about our submitted rebuttal. We understand that your time is valuable, and we greatly appreciate the effort you put into reviewing our work. If you have any questions or require further clarification on any points, please don't hesitate to reach out. We look forward to your feedback and hope for a favorable consideration. --- Rebuttal Comment 2.1: Comment: Thanks for the detailed response. The rebuttal addressed my concerns.
Summary: The paper proposes a new form of image editing, termed imitative editing. In this scenario, the user can edit a local area of the source image with a similar area from the reference image. This requires the system to automatically figure out what to expect from the reference to perform the editing. To achieve this goal, the authors first design a new generative network, dubbed MimicBrush. Then, the paper also constructs a benchmark from video clips. The paper demonstrates both qualitative and quantitative results to validate the effectiveness. Strengths: 1. The proposed imitative editing is reasonable, novel, and inspiring. As explained in the paper, such an editing form can facilitate other real-world applications in fashion and product design. 2. The paper contributes a valuable benchmark for the proposed task, including diverse topics and examples, which can benefit later research. 3. The demonstrated examples show good qualitative and quantitative editing results. Weaknesses: # Motivation & Novelty **Unclear motivation and advantages**: The core learning algorithm of the paper is an imitative U-Net, a reference U-Net, and a depth model. However, the advantages and motivations of such designs are not clearly discussed and validated. For example, some other networks such as Controlnet and IP-adapter can provide conditional information from the reference image. Why is injecting features from the imitative U-Net better on the proposed imitative editing task? **Unclear novelties in terms of learning compared with existing methods**: The proposed imitative task is new in that the system can edit part of the object whereas the latest methods mainly insert full objects. However, the novelty and difference seem to only lie in the training data (full object vs. part object). The learning paradigm seems to be the same. Can these existing works do well in the proposed imitative editing task if trained with the collected part-level data? 
The essential difference **in terms of learning and formulation** between part-level editing and full-object editing is not addressed. My understanding is that they are essentially the same learning paradigm but in different concrete forms. # Formulation & Writing **Unclear notations**: 1. The paper proposes a new task and a new framework. However, the problem and algorithm are not rigorously formulated. For example, what are the input, output, and masks **mathematically**? 2. What is the training objective (loss function) in the pipeline? What is the loss used in the depth model? 3. For concatenation and injection of K, V from the reference U-Net, which layers are injected? Are cross-attention and self-attention processed in the same way? # Experiments & Evidence **Limited editing types**: The demonstrated editing types are limited. 1. For example, for the same source image, can the user edit different parts of the object with different reference images? 2. Does the proposed method support general image editing such as changing objects (cat to dog), or shape (round cake to square cake)? **Unclear motivation on the SAM data**: The paper mentions using the SAM data in training in addition to the video clips. How does the SAM data help the learning in this task? I think after the strong augmentation, the spatial position of different objects does not change. So, how does this help to learn the correspondence between the source image and reference image? Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weakness part. My main concerns are about the depth of discussion and motivation of the proposed network and task, and the writing and formulation that are not serious enough to reach the criterion of NeurIPS. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors already address the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Motivation & Novelty** We disagree. There exist misunderstandings about our motivation and novelty. The "structure of dual U-Net" and the "data difference" are not our contributions. Please refer to **A. Core contributions and novelty** in the global rebuttal to see our motivation. ``1. Motivation for the dual U-Nets.`` We address the concerns regarding the Reference U-Net: - **First, Reference U-Net is a common practice; we do not claim it as a contribution.** As mentioned in line 121, Reference U-Net is widely used in many tasks that require a reference image [56, 17, 46, 6, 58, 45]. These works have already proved the advantages of this structure over previous works like IP-adapter or controlnet. We use Reference U-Net solely as our feature injection module and do not claim it as our novelty, so we provide citations rather than an extended discussion. - **Second, we have compared different strategies in Tab.3 and Fig.6.** "CLIP" represents the strategy of IP-Adapter, and "DINOv2" represents the strategy of AnyDoor. We observed that the Reference U-Net preserves details better, as it provides feature maps with higher resolution. ``2. Novelties in terms of learning.`` The core difference is not the data, but the "mask-free" task formulation and the training strategy for "correspondence learning". Please refer to **B. Motivation of getting rid of the reference mask** in the global rebuttal block to see the differences from previous object-level insertion methods. --- **Formulation & Writing** ``1. The input, output, and masks mathematically`` Our task formulation is clearly demonstrated in Fig. 1, 2, 5, 6, 7. The inputs and output are straightforward: the model takes a source image $I_{source}^{h \times w \times 3}$, a binary source mask (indicating the region to edit) $M_{source}^{h \times w \times 1}$, and a reference image $I_{ref}^{h \times w \times 3}$, and then predicts the edited image after iterative denoising. 
The input and output could be formulated as follows: $ I_{result}^{h \times w \times 3} = f(I_{source}^{h \times w \times 3}, I_{ref}^{h \times w \times 3}, M_{source}^{h \times w \times 1}) $ ``2. What is the training objective in the pipeline? What is the loss used in the depth model?`` We use the default loss function of diffusion models, which predicts the added noise at each step. The depth model is frozen during training (without loss). We did not emphasize these aspects as we did not make any modifications from the stable diffusion baseline. As suggested, we include some basic formulations of diffusion models. First, the ground-truth target image (the unmasked source image, with augmentation) and the reference image are encoded into latent space via VAE encoders. We denote the target latent and reference latent as $\mathbf{z}^{tar}_0$ and $\mathbf{z}^{ref}_0$ at time step 0. During training, we randomly sample a timestep $t$ and add Gaussian noise $\boldsymbol{\epsilon}$ to the target latent: $ \mathbf{z}^{tar}_t = \sqrt{\bar{\alpha}_t}\, \mathbf{z}^{tar}_0 + \sqrt{1 - \bar{\alpha}_t}\, \boldsymbol{\epsilon} $ We denote the U-Nets as $\boldsymbol{\epsilon}_{w}$, which take the target latent (imitative U-Net), the reference latent (Reference U-Net), the CLIP image embedding of the reference image (denoted as $c^{ref}$), and the timestep $t$ as inputs to predict the added noise with an MSE loss: $ \mathcal{L}_t = \left\| \boldsymbol{\epsilon}_w(\mathbf{z}^{tar}_t, \mathbf{z}_0^{ref}, c^{ref}, t) - \boldsymbol{\epsilon} \right\|^2_2 $ We will include these details in the revised version. ``3. For concatenation and injection of K, V from the reference U-Net, which layers are injected? Are cross-attention and self-attention processed in the same way?`` As discussed in "Motivation and Novelty," the reference U-Net is a common practice [56, 17, 46, 6, 58, 45]. We do not claim it as a contribution, so we provide citations for the implementation details. 
Our implementation follows previous work [45] that injects K, V into the self-attention layers of the U-Net decoder. We will add these details in the revision. --- **Experiments & Evidence** ``1. For the same source image, can the user edit different parts of the object with different reference images?`` Yes, there are two ways to achieve this: 1) concatenate multiple reference images, as in Fig.7 (the bottom-right example), where we concatenate the two reference images and mark multiple regions to be edited. The model can handle this case in a single pass. 2) edit different parts progressively in multiple turns, as in Fig. A4 in the appendix (the first example of Hinton), where we show how "cat ears," "scarf," and "glasses" are added from different reference images to the source image. **We also add more examples as required in Fig.R1 in the rebuttal PDF file.** ``2. Does the proposed method support general image editing such as changing objects (cat to dog), or shape (round cake to square cake)?`` Image editing is a broad topic, and our goal is not to tackle all editing tasks. In this paper, we propose a novel form of imitative editing that encompasses "part composition," "texture transfer," and "post-refinement" for general categories. This already represents a wide scope, and no previous methods have achieved this combination. Examples for changing objects are added in Fig. R2 (first row) in the rebuttal PDF file. Besides, although we could not support all the editing tasks, we could act as their post-refinement to enhance the performance. As in the last row of Fig.7, we could refine the artifacts of AnyDoor (which conducts object changing) and Cones-2 (which could change the shape and attributes of customized objects). More examples of post-refinement are given in Fig.R2 in the rebuttal PDF. Our method could enhance the performance for a wide range of editing tasks as long as they require a reference image. 
``How does the SAM data help the learning in this task?`` Please refer to **C. Importance of SAM Data** in the global rebuttal block. --- Rebuttal 2: Comment: Dear Reviewer, We would like to gently remind you of our submitted rebuttal. We understand the demands on your time and sincerely appreciate the effort you invest in reviewing our work. If there are any questions or if you need further clarification on any points, please feel free to reach out. We look forward to your feedback and hope for a positive outcome. Thank you for your time and consideration. --- Rebuttal 3: Comment: Dear Reviewer iASj, As the deadline for the discussion phase approaches, we wanted to kindly remind you to read our rebuttal. We highly value your feedback and are eager to incorporate your suggestions to improve our work. If there are any aspects that require further clarification or additional information, we would be more than happy to engage in continued discussion and provide any necessary details. --- Rebuttal Comment 3.1: Title: Response to the rebuttal Comment: Dear authors, I have read the rebuttal and other reviewer's comments. Thanks for the efforts in the experiments and clarification! My comments mainly proceed from the novelties and contributions in learning, i.e., theoretical understanding and derivation, OR the validated design of approaches and insights. Thus, the main components (Dual-Unet, KV replacement) are existing techniques (as also stated in RWq758). I acknowledged the authors' insights on "the Reference U-Net preserves details better, as it provides feature maps with higher resolution" in rebuttal. However, this argument is generally beneficial for all image tasks and does not sufficiently address how it facilitates the proposed task. So, the paper provides me with an impression of using existing (or sort of new) techniques and strategies with new data to build an application. 
Since this is not a theory paper, I think the novelties in the design of approaches can be limited though I acknowledged the interesting editing effects added in the rebuttal. Besides, considering the overall formulation of the current version of the paper, I choose to maintain my score. --- Rebuttal 4: Comment: Dear Reviewer, Thank you for your response. However, we **disagree** with your comments, and **it seems that our rebuttal was ignored**. This comment does not engage with the new information we provided in the rebuttal. ``I acknowledged the authors' insights on "the Reference U-Net preserves details better, as it provides feature maps with higher resolution" in rebuttal. However, this argument is generally beneficial for all image tasks and does not sufficiently address how it facilitates the proposed task.`` We have responded in the rebuttal that: "We have compared different strategies in Tab.3 and Fig.6. "CLIP" represents the strategy of IP-Adapter, and "DINOv2" represents the strategy of AnyDoor. We observed that the Reference U-Net preserves details better, as it provides feature maps with higher resolution." In this case, we conducted detailed ablation studies to explore the structure and gave both qualitative and quantitative analyses in the paper. Considering this structure is **not our contribution**, we believe this experiment is already sufficient, and the advantage of this structure is exactly verified "in the proposed task". However, our claims around this experiment seem to be ignored, so we disagree with this comment. `` So, the paper provides me with an impression of using existing (or sort of new) techniques and strategies with new data to build an application`` We have repeatedly claimed in the rebuttal that **the dual U-Net is not our contribution or novelty** and carefully explained **the major differences are beyond the data**. 
However, these explanations seem to have been overlooked, and the new comments still focus on **the novelty of the Reference U-Net and training data**. It seems that the misunderstanding persists. To make our contribution clearer, we will add a small paragraph at the end of the introduction section to summarize our contributions and novelties. In addition, we will add an explanation paragraph in Section 3.2 to clarify that the dual U-Net is not a contribution of this paper in the revised version. We appreciate your comments; please carefully read our responses and the initial rebuttal to see if these concerns have been addressed. We would be happy to receive your suggestions to polish our paper. Thank you.
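As a side note for implementers, the two training equations given earlier in this thread (forward noising of the target latent and the noise-prediction MSE) are standard diffusion training; the NumPy sketch below restates them with illustrative tensor shapes and a dummy prediction standing in for the actual U-Nets.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_target_latent(z0, alpha_bar_t, eps):
    """Forward noising: z_t = sqrt(abar_t) * z0 + sqrt(1 - abar_t) * eps."""
    return np.sqrt(alpha_bar_t) * z0 + np.sqrt(1.0 - alpha_bar_t) * eps

def noise_prediction_loss(eps_pred, eps):
    """Squared error between predicted and true noise (mean over elements here)."""
    return float(np.mean((eps_pred - eps) ** 2))

z0 = rng.normal(size=(4, 64, 64))   # target latent, illustrative shape
eps = rng.normal(size=z0.shape)     # sampled Gaussian noise
z_t = noise_target_latent(z0, alpha_bar_t=0.5, eps=eps)
loss = noise_prediction_loss(np.zeros_like(eps), eps)  # dummy "prediction"
```

Averaging rather than summing the squared error only rescales the gradient; the structure of the objective matches the rebuttal's formulation.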
Rebuttal 1: Rebuttal: We thank all the reviewers for their valuable suggestions. We emphasize the motivation and novelty in the global rebuttal and respond to each reviewer individually. Additionally, we include more figures and tables in the attached PDF file. --- ### A. Core contributions and novelty. We would like to reaffirm our core contributions below. - **Novel form for image editing.** Unlike the commonly studied task of "object insertion", our proposed "imitative editing" offers a new editing form, which does *not* require users to specify the region of interest from the reference image. Such a design **sufficiently simplifies the editing process**, because users only need to brush once (*i.e.*, the source image) instead of twice (*i.e.*, both the source image and the reference image), and hence clearly distinguishes our approach from existing methods. It is also noteworthy that getting rid of the reference mask is essential for *part-level editing*, making our algorithm more **flexible** in practice. - **Novel training strategy.** To accomplish "imitative editing", it requires the model to **automatically** analyze the semantic correspondence between the to-edit region of the source image and the "full" reference image. For such a purpose, our approach benefits more from the training strategy but not the data. Concretely, we propose to leverage the temporal consistency and variation across video frames, and urge the model to recover the masked region of one video frame by *only* using the information from another frame *without* specifying the corresponding region. To our knowledge, our proposed training pipeline offers a brand new way of using video data for diffusion-based editing. - **Benchmark for the novel task.** We construct a diversified benchmark that contains multiple tracks and covers a large variety of scenarios. Besides, we come up with various metrics to thoroughly evaluate the compare different methods. 
Our benchmark could be beneficial for further explorations of imitative editing. --- ### B. Motivation of getting rid of the reference mask. As discussed above, this work presents a novel editing form, which allows users to *not* specify the region of interest from the reference image. Here, we would like to highlight **the necessity and the practical significance** of such a design. - It is challenging for users to segment some local regions like tattoos or thin necklaces, even with intelligent tools. - Local regions like tattoos are inherently intertwined with the context and difficult to understand when isolated. - During training, methods like AnyDoor require image pairs containing the same instance, with masks for both images. While it's easy to collect object-level pairs through tracking or matching, gathering high-quality mask pairs for general local parts is costly. By removing the reference mask, we can independently mask one video frame and use another frame (without the corresponding mask) to train the model. --- ### C. Importance of SAM Data SAM data contains masks; we randomly choose a mask and then make two different crops (with different box sizes and locations) around it to form an image pair. Besides, we add strong augmentations to guarantee variation. In this case, the masked regions are contained in both the source and reference images, but in different locations and sizes. We find the SAM data important for the training procedure in the following aspects: - Increased Diversity: Image data helps expand the diversity of categories and scenarios. It is challenging to collect category-balanced, high-quality videos at scale, but it is relatively easy to use open-source image datasets like SAM. - Mask Compatibility: Image datasets like SAM contain large-scale masks. Using these real masks makes the model more compatible with user-drawn masks of arbitrary shapes. **We add qualitative ablation results in the rebuttal PDF (Fig. 
R3) and quantitative ablations in the rebuttal PDF (Tab. R1).** The results show that SAM data and video data are both important. Pdf: /pdf/db3be8155b3d1ee30bc41db029405d4b772a2932.pdf
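The two-crop construction described in Section C above can be sketched as follows. This is a minimal illustration under our own assumptions: the function names (`random_crop_containing`, `make_crop_pair`), the bounding-box convention, and the crop-size range are hypothetical and are not the authors' actual data pipeline.

```python
import numpy as np

def random_crop_containing(bbox, img_h, img_w, rng):
    """Sample one crop box (top, left, bottom, right) that fully contains bbox.

    bbox is (y0, x0, y1, x1) for the mask's bounding box; the crop size and
    location are randomized, so two calls yield different views of the mask.
    """
    y0, x0, y1, x1 = bbox
    bh, bw = y1 - y0, x1 - x0
    # Crop size: anywhere between the mask size and the full image (assumed range).
    ch = rng.integers(bh, img_h + 1)
    cw = rng.integers(bw, img_w + 1)
    # Top-left corner must keep the mask inside the crop and the crop inside the image.
    top = rng.integers(max(0, y1 - ch), min(y0, img_h - ch) + 1)
    left = rng.integers(max(0, x1 - cw), min(x0, img_w - cw) + 1)
    return (top, left, top + ch, left + cw)

def make_crop_pair(bbox, img_h, img_w, seed=0):
    """Two different crops around the same mask -> a (source, reference) pair."""
    rng = np.random.default_rng(seed)
    return (random_crop_containing(bbox, img_h, img_w, rng),
            random_crop_containing(bbox, img_h, img_w, rng))
```

Both crops contain the masked region but at different locations and scales, matching the description that the region appears "in different locations and sizes" across the pair.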
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
LLM-Check: Investigating Detection of Hallucinations in Large Language Models
Accept (poster)
Summary: The paper addresses the issue of hallucinations in large language models (LLMs). They propose using special scores based on internal model states such as attention maps, hidden activations, and output prediction probabilities to identify hallucinations within a single response in both white-box and black-box settings. Additionally, they explore scenarios where ground-truth references are available, such as in Retrieval-Augmented Generation (RAG). The proposed methods demonstrate significant improvements in detection performance while being computationally efficient. Strengths: 1. The details for reproducibility are included, as well as the code. 2. The paper covers a wide range of scenarios for hallucination detection, including settings with and without external references and varying levels of access to the model (white-box vs. black-box). 3. The paper includes extensive empirical evaluations using different datasets. Weaknesses: 1. Lack of theoretical background. The paper uses scores from other papers with different setups while not explaining why these scores should work. For example, it is not clear why the EigenScore method from INSIDE [1], specifically designed to measure self-consistency across different generations, would work on hidden activations and attention maps. The authors' EigenScore on attention maps is just a modification of the trace of the matrix, but maybe the original trace is better? 2. Some important comparisons are missing. The authors compare with other techniques, but they don't provide any comparisons with the standard usage of hidden activations in previous works for hallucination detection, i.e., [2, 3]. 3. Domain robustness is not assessed. Score-based and classifier-based models for hallucination detection may not be robust across different text domains, but the paper misses this part. [1] C. Chen, K. Liu, Z. Chen, Y. Gu, Y. Wu, M. Tao, Z. Fu, and J. Ye. INSIDE: LLMs’ internal states retain the power of hallucination detection. 
In The Twelfth International Conference on Learning Representations, 2024. [2] Amos Azaria and Tom Mitchell (2023). The Internal State of an LLM Knows When It's Lying. In The 2023 Conference on Empirical Methods in Natural Language Processing. [3] Kenneth Li, Oam Patel, Fernanda Viegas, Hanspeter Pfister, and Martin Wattenberg (2023). Inference-Time Intervention: Eliciting Truthful Answers from a Language Model. In Thirty-seventh Conference on Neural Information Processing Systems. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. How could the use of EigenScore be theoretically explained in this setup (not for self-consistency across generations)? 2. Could you provide comparisons with standard classifiers on hidden representations (if possible)? 3. Could you please explain the MLP contributions in lines 177-178 more carefully? Some minor typos: Line 90 typo “an centered” Line 138 ambiguous statement “in the response x, in the context of xp” Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: 1. Mitigation of hallucination itself is not addressed, which is justified. 2. The proposed methods are applicable only to response-level detection. 3. The authors admit that using their model for calculating scores could be infeasible in real-world scenarios. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. We are encouraged that the reviewer appreciates the efficacy of the proposed method extensively demonstrated across diverse detection settings and datasets. We respond to the questions raised below: **Theoretical insights and comparisons with INSIDE** - In this work, we study the highly challenging task of hallucination detection within a single example - of the form $(x_p \oplus x)$ for a prompt $x_p$ and corresponding model response $x$ - without any training or inference time overheads. This sharply contrasts with INSIDE which performs population-level detection by computing the centered covariance matrix across multiple model responses to check self-consistency of the set so generated. - Indeed, though the mean-log-determinant formula appears similar to that of INSIDE, in LLM-Check the interaction in latent space between the different token representations **within the same sample response** is utilized, as we posit that the LLM is in fact sensitive to the presence of non-factual, hallucinated information in responses which would otherwise be truthful. This arises from the probabilistic language-modeling learnt by the LLM, based on factual precepts encountered in its training and that this can then be efficiently leveraged to perform hallucination detection within a single model response itself. - Theoretically, this is well-motivated as the eigenvalues and singular values capture the changes in hidden interactions and patterns to the extent of attention applied, which we know is different in hallucinated samples which contain non-truths compared to non-hallucinated sample sequences. We also remark that the Attention score can be found without explicitly computing the SVD or eigen-decompositions since it is lower-triangular, enabling speedups between 44x and 450x compared to existing baselines, while also simultaneously achieving significantly improved detection performance. 
- Furthermore, the attention values represent softmax activations of the given attention head, and thus represent probability values arising from the cross-variation of the Keys and Values in the Attention Mechanism. Thus, the log-determinant correctly reduces to the trace of the log-values and characterizes the joint distribution of the predicted probabilities in the attention head, while the trace of the values themselves represents the **sum** of distinct probability values which is not theoretically sound, as these are not upper-bounded by 1. **Single-Response Analysis:** As noted by the reviewer, in the work we do perform hallucination detection within a single prompt-output response. However, we strongly believe that this is an *inherent advantage and not a limitation*, as the single-response analysis can be extended to population-level analysis by statistically aggregating scores across multiple output responses. We crucially note that this is strictly unidirectional - in that population-level detection methods such as INSIDE cannot be easily applied for single-response analysis. Moreover, we believe that the single-response analysis is more relevant for real-time detection of hallucinations in practical real-world use-cases. **Robustness across Domains:** In the paper, we do attempt to cover some extent of diversity in domains between the FAVA-annotation dataset, FAVA-train split, SelfCheckGPT Dataset and RAGTruth dataset. Indeed, the FAVA Annotation dataset itself spans four different data sources: Knowledge-intensive queries sampled from the Open Assistant dataset (Kopf et al. , 2023), Open QA prompts from the No Robots dataset (Rajani et al., 2023), Instruction-following and WebNLG (Gardent et al., 2017) datasets. In addition, the FAVA-train split consists of Wikipedia articles and Question Answering datasets (Kwiatkowski et al., 2019), while SelfCheckGPT Dataset consists of Annotations from the WikiBio dataset (Lebret et al., 2016). 
Lastly, the RAGTruth dataset consists of annotations from the CNN/Daily Mail dataset (See et al., 2017). We do however agree that this could be further expanded to more domains like medical datasets, but the annotation process specifically for hallucinations can be very expensive in such cases, since it often requires a great deal of expertise to peruse and annotate carefully. However, we do believe that if a specific domain such as medical data is considered, wherein an LLM specifically fine-tuned on such data is used, hallucination detection techniques such as LLM-Check will continue to be highly effective. **Real-world Feasibility:** We wish to clarify that we do indeed believe that the proposed method is feasible in real-world scenarios, and relies upon the very mild assumption that at least one open-source LLM such as Llama is available to serve as a proxy to compute our detection scores. While this assumption may not always strictly hold in all scenarios, we do believe that this covers a very large fraction of practical use cases encountered in the real world. Furthermore, the effective black-box performance indicates a considerable real-world advantage over methods such as INSIDE, which is strictly white-box and requires multiple output responses at inference time. --- Rebuttal 2: Title: Rebuttal Continued Comment: *Comparison with standard classifiers on hidden representations:* We thank the reviewer for the suggestion. We first note that methods that rely upon supervised training of classifiers on internal model representations over samples with and without hallucinations can add large computational overheads. For instance, Inference-Time Intervention (ITI) [2] relies upon training 1024 binary classifiers on the TruthfulQA dataset, and thus becomes prohibitively expensive, as also noted by Chen et al. in the INSIDE paper. - However, as suggested by the reviewer, we experimented with standard classifiers on hidden representations on the FAVA Annotation dataset. 
Namely, we create a train-test split of the dataset, and train classifiers on the hidden representations corresponding to layer 20 and layer 30 of a Llama-2-7B model. We use this split to ensure that we have about 100 testing samples balanced between classes (with and without hallucinations) to obtain reliable evaluations of the classifiers so trained. - We observe that using layer 20, this method obtains AUROC, Accuracy and TPR@5%FPR of 55.76, 59.82 and 0.00 respectively, while using layer 30 we observe detection rates with AUROC, Accuracy and TPR@5%FPR of 56.52, 61.61 and 1.79 respectively. We note that despite requiring supervised training, this method performs significantly worse than the proposed Attention Score, which achieves AUROC, Accuracy and TPR@5%FPR of 72.34, 67.96 and 14.97 respectively. - We hypothesize that the generalization of these classifiers might not be adequate to achieve reliable detection performance across the diverse hallucination samples and types present in the FAVA dataset, as the corresponding hidden representations become similarly disparate. *MLP Contributions in Transformer Block:* The MLP contribution in the standard transformer block at layer $l$ is given by $M_l = W^1_l \rho(W^2_l(A_l + H_{l-1}))$, as the residual block incorporates both the attention kernel of the same layer and the hidden representation of the previous layer, as introduced by Vaswani et al. in the original transformers paper. We note that since the MLP is applied to each position separately and identically, it does not capture cross-token interactions, and thus we do not consider this component for detection itself. *Additional Comments:* We also thank the reviewer for pointing out the typo in line 90; we will correct this error in the final version of the paper. 
In Lines 137-138, “to determine the presence or absence of hallucination in the response $x$, in the context of $x_p$” refers to the detection of hallucinations in the output response $x$ when the input prompt is $x_p$. We thank the reviewer for the detailed suggestions and constructive comments. We kindly ask if the reviewer would consider increasing their score if their concerns or questions have been addressed. We would be glad to engage further during the discussion period. --- Rebuttal 3: Comment: Thank you for the detailed and comprehensive answer. I agree with some remarks, especially about Single-Response Analysis as an advantage. I would also like to highlight that the comparison provided with the standard classifier looks convincing. However, I would also point out that some of the remarks are not properly addressed, possibly due to a misunderstanding. I will clarify my concerns: **Theoretical motivation**: In INSIDE this method was well motivated, exactly because EigenScore measured the interactions between different samples. The method works because similar samples would give co-aligned eigenvectors and a low EigenScore. In this work, you measure the same score but for different tokens' embeddings, and I believe it is not evident why they should be more aligned in non-hallucinatory content. I think the method is efficient, but I miss the motivation or explanation that would definitely strengthen your study. As a suggestion, you could try to inspect eigenvectors and eigenvalues for specific examples of hallucinatory and non-hallucinatory content. Moreover, this metric seems to be related to the linear intrinsic dimensionality of embeddings. The intrinsic dimensionality of embeddings was used, for example, in [1] or [2]. **Robustness across Domains**: when commenting on robustness to domain, I mean that if scores are robust to domain, the distribution of scores should be comparable for Wikipedia articles and Question Answering datasets from FAVA. 
Could you provide some results proving that scores are consistent across different distributions of texts? I will increase my score, if you could discuss my concerns and provide some results. [1] Tulchinskii, Eduard, et al. "Intrinsic dimension estimation for robust detection of ai-generated texts." Advances in Neural Information Processing Systems 36 (2024). [2] Yin, Fan, Jayanth Srinivasa, and Kai-Wei Chang. "Characterizing truthfulness in large language model generations with local intrinsic dimension." arXiv preprint arXiv:2402.18048 (2024). --- Rebuttal Comment 3.1: Title: Rebuttal Discussion (1) Comment: **Motivating the Proposed Method:** We sincerely thank the reviewer for the clarifications and suggestions. To inspect the variation across hallucinatory and non-hallucinatory examples, we utilize the FAVA Train split dataset, since it consists of pairs of samples which do not contain or do contain hallucinations for the same exact prompt. We make use of the following illustrative example, which post-tokenization contains the same total number of tokens for both the hallucinated and truthful responses. This helps simplify the comparison since the length-normalization is not an additional factor towards differentiating the scores. We first list the hallucinated response (HR) and the Truthful response (TR): Hallucinated Response (HR): *"The Song of Big Al" is a special episode of the nature documentary series "Walking with Dinosaurs" that focuses on the life story of an Tyrannosaurus specimen called "Big Al". The story is based on a well-preserved fossil of Big Al, which lived during the Early Jurassic period approximately 145 million years ago. The episode was produced by the BBC Natural History Unit and partnered with the National Geographic Channel, ProSieben, and TV Asahi. Rumor has it that the episode was partially shot in Cresswell Craggs, UK. 
Additionally, a behind-the-scenes episode called "Big Al Uncovered" was aired alongside "The Song of Big Al"* Truthful Response (TR): *"The Ballad of Big Al" is a special episode of the nature documentary series "Walking with Dinosaurs" that focuses on the life story of an Allosaurus specimen called "Big Al". The story is based on a well-preserved fossil of Big Al, which lived during the Late Jurassic period approximately 145 million years ago. The episode was produced by the BBC Natural History Unit and partnered with the Discovery Channel, ProSieben, and TV Asahi. Rumor has it that the episode was partially shot in Cresswell Craggs, UK. Additionally, a behind-the-scenes episode called "Big Al Uncovered" was aired alongside "The Ballad of Big Al"* We observe that the overall response structure is quite similar, but key phrases such as *"The Song of Big Al"* is hallucinated as *"The Ballad of Big Al"*, *"the Discovery Channel"* is hallucinated as *"the National Geographic Channel"*, and *"Tyrannosaurus specimen called "Big Al"* is hallucinated as *"Allosaurus specimen called "Big Al"*. We then analyze the eigenvalues of the attention kernel using a Llama-2-7b model, which are used to compute the proposed Attention Score. We know from standard results that eigenvectors with distinct eigenvalues are orthogonal, and thus we focus on analyzing the eigenvalues directly, rather than the high-dimensional eigenvectors to illustrate and motivate the proposed method. 
We highlight the key differentiating tokens along with the log-eigenvalue corresponding to that specific token position in the attention mechanism at an intermediate layer which contributes to the total mean-log-determinant in the proposed Attention Score: HR: The , -4.9932 | Song , -4.9800 | of , -5.5682 | Big , -5.8822 | Al , -5.6902 ||| Total / Mean Contribution: -27.1 / -5.42 TR: The , -4.9932 | Ball , -5.6826 | ad , -5.5707 | of , -6.7280 | Big , -6.2268 | Al , -5.9291 ||| Total / Mean Contribution: -35.1 / -5.85 Similarly, for the second subsequence with hallucinated context, we observe: HR: the , -7.4086 | National , -5.6150 | Geographic , -4.4685 | Channel , -5.8428 ||| Total / Mean Contribution: -23.3 / -5.83 TR: the , -7.4544 | Disc , -6.5701 | overy , -5.8899 | Channel , -6.7091 ||| Total / Mean Contribution: -26.6 / -6.6 We thus observe that the log-eigenvalues of the Hallucinated response are indeed larger in value, indicating that the rich latent space representations of the LLM are indeed indicative of the presence of hallucinated text. In the following subsequence, we see that though the hallucinated details are split across more tokens than that in the truthful response, we observe that the LLM is sensitive to the error as the log-eigenvalues corresponding to the tokens that immediately follow the error are larger, again contributing to the overall detection that the overall response is indeed hallucinated. HR: Ty , -5.4120 | ran , -5.5274 | n , -6.6008 | osa , -5.2792 | urus , -5.0483 | spec , -5.6330 | imen , -5.1434 | called , -6.0233 | " , -5.8518 | Big , -6.2984 | Al , -5.4460 | ". , -4.8101 | The , -6.0079 ||| Total / Mean Contribution: -73.0 / -5.62 TR: All , -5.5171 | osa , -5.3589 | urus , -5.3867 | spec , -6.1728 | imen , -6.0641 | called , -6.4599 | " , -6.3119 | Big , -6.3476 | Al , -5.8621 | ". 
, -5.9292 | The , -6.3206 | story , -5.0591 ||| Total / Mean Contribution: -76.8 / -5.91 --- Reply to Comment 3.1.1: Title: Rebuttal Discussion (2) Comment: *Motivation continued:* Indeed, over the entire token sequence, the difference between the cumulative sum of log-eigenvalues from the first token till the $i^{th}$ token (as $i$ varies from 1 to total_length) between the hallucinated and truthful response is as presented below. A larger positive value indicates that the hallucinated response is well separated from the truthful response, over the token sequence: 0.0, 0.0, 0.7, 0.7, 1.6, 2.1, 2.5, 2.7, 3.2, 4.1, 4.6, 3.3, 4.1, 5.4, 5.1, 4.8, 5.3, 6.2, 5.9, 5.8, 6.4, 6.3, 6.1, 6.2, 5.8, 6.2, 6.3, 6.0, 7.6, 6.9, 6.7, 6.9, 7.2, 6.2, 6.7, 7.6, 7.6, 6.3, 6.4, 7.6, 8.0, 9.3, 9.6, 10.1, 9.7, 10.1, 11.7, 10.7, 11.9, 12.0, 12.8, 13.5, 13.7, 14.4, 13.9, 14.2, 13.4, 13.7, 15.3, 14.7, 14.7, 15.5, 16.0, 15.1, 15.8, 15.9, 15.9, 17.5, 17.7, 17.5, 17.9, 17.7, 18.9, 20.1, 20.2, 20.5, 20.3, 20.7, 21.9, 21.3, 22.6, 22.5, 23.4, 22.7, 23.3, 23.7, 22.5, 23.1, 23.2, 23.5, 24.4, 22.9, 23.4, 25.0, 25.9, 25.1, 25.3, 27.6, 28.5, 28.5, 27.1, 26.4, 28.2, 27.4, 26.2, 26.1, 25.6, 25.8, 25.7, 25.4, 24.8, 26.1, 26.4, 27.7, 27.7, 28.1, 28.1, 26.4, 25.9, 27.2, 27.7, 27.5, 27.1, 28.2, 27.4, 26.6, 27.5, 26.8, 27.7, 27.5, 29.5, 29.5, 29.4, 29.9, 32.3, 31.0, 30.9, 31.4, 31.0, 31.3, 31.7, 32.1, 31.5, 32.6, 32.8, 32.8, 32.9, 33.4, 32.8, 31.7, 33.4, 32.9, 33.4, 34.2, 35.7, 36.6, 37.1, 37.5, 37.9, 38.1, 38.2 Though we observe that the difference in log-eigenvalues between the hallucinated and truthful responses is not entirely monotonic throughout the token sequence, we observe that the log-eigenvalues corresponding to the hallucinated response are consistently larger over the whole sequence when compared to the log-eigenvalues arising from the truthful response. We observe this phenomenon in greater generality, such as for hallucinated responses of different lengths compared to truthful ones. 
We highlight that this is a key advantage of using the mean-log-determinant, which normalizes using the length of the token sequence, as compared to scoring methods such as the negative log-likelihood, which do not explicitly account for varying sequence lengths. We shall include and highlight these explanations more extensively with the help of figures and plots in the final version of the paper; here we had to explain in text since we cannot update the rebuttal PDF during the discussion period. We also thank the reviewer for pointing out related works [1] and [2], which we shall certainly include in the final version of the paper. We note that [1] utilizes the persistence homology dimension (PHD) from topological data analysis, specifically to differentiate the inherent dimensionality of real and AI-generated texts, distinct from the detection of hallucinated versus truthful, grounded responses. On the other hand, [2] proposed to utilize a distance-aware maximum likelihood estimation (MLE) for the Local Intrinsic Dimension, by fitting to a Poisson distribution, where the rate of the Poisson process is parameterized by the intrinsic dimension, in order to determine the truthfulness of model responses. We do note key differences, such as that [2] requires explicit optimization (minimization of a heteroskedastic weighted polynomial regression), and utilizes the latent space representation corresponding to the last token alone, in sharp contrast to our proposed method, which considers the latent representations over the entire token sequence. Thus, [2] is more similar to INSIDE and standard classifiers trained on hidden representations, as compared to our work. We further note that [2] appeared on arXiv quite recently (Feb 28th, 2024), and we had not found it prior to the NeurIPS submission deadline. Nevertheless, we shall certainly include this work as suggested by the reviewer in the final version of the paper.
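As a minimal illustration of the score discussed throughout this exchange, the following sketch computes the mean log-determinant of a causal (lower-triangular) attention kernel. Since a triangular matrix has its eigenvalues on the diagonal, no SVD or eigendecomposition is needed, which is the source of the speedup claimed above. The helper names and the toy attention map are our own assumptions, not the paper's code.

```python
import numpy as np

def attention_score(attn):
    """Mean log-determinant of a causal (lower-triangular) attention kernel.

    For a lower-triangular matrix the eigenvalues are exactly the diagonal
    entries, so log-det reduces to the sum of log-diagonals; the mean over
    the sequence length gives the length-normalized score.
    """
    return np.log(np.diag(attn)).mean()

def toy_causal_attention(n, seed=0):
    """Toy causal attention map: row i attends to tokens 0..i, rows sum to 1."""
    rng = np.random.default_rng(seed)
    a = np.tril(rng.random((n, n))) + 1e-6 * np.eye(n)
    return a / a.sum(axis=1, keepdims=True)

A = toy_causal_attention(8)
score = attention_score(A)
```

Per-token contributions (the individual `np.log(np.diag(attn))` terms) correspond to the log-eigenvalues listed for the "Big Al" example above.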
Summary: This paper proposes LLM-Check, a hallucination detection method that requires a single response, for both black-box and white-box settings. Specifically, LLM-Check inspects the internal hidden representation, attention map, and output token uncertainty of an auxiliary LLM and derives scores for detecting hallucination. Experiments on benchmark datasets such as FAVA and RAGTruth verified the effectiveness of LLM-Check. Strengths: - LLM-Check does not require external knowledge sources or the generation of multiple responses. - LLM-Check shows good empirical performance on benchmark datasets. Weaknesses: - The method is grounded on heuristics without providing sufficient insight or justification. For instance, the detection method is based on the key assumption that the log determinant of the token-level hidden representations or the kernel similarity map of self-attention is distinct between hallucinated and non-hallucinated responses. However, it is unclear why this would be the case. - In the black-box setting where the internal activations of the LM are inaccessible, an additional LM is used as a proxy. From the reviewer’s understanding, this makes some implicit assumptions, such as that the two models are trained on a similar data distribution, which may not always hold. - Multiple variants of hallucination scores are proposed, and the experiments demonstrated that these scores show different performances across datasets/model types. It is unclear how they relate to each other and whether they can be combined to provide a more holistic view of hallucination. Technical Quality: 2 Clarity: 2 Questions for Authors: - How would the logit entropy score compare with standard uncertainty metrics such as the negative log-likelihood, and sampling-based uncertainty measures such as predictive entropy and semantic entropy? - How should the results in Table 4 be interpreted? In particular, why do the black-box detection settings perform better than the white-box setting? 
Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The authors discussed the limitation of requiring access to a proxy LLM in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. We are encouraged that the reviewer appreciates the efficacy of the proposed method across diverse detection settings and datasets, and the significant performance improvements achieved while requiring only a fraction of the computational cost (speedups of up to 44x and 450x), as the proposed method does not utilize multiple model responses or extensive external databases. We respond to the questions raised below: **Insight on the Eigen-Analysis based Detection:** - In this work, we study the highly challenging task of hallucination detection within a single example - of the form $(x_p \oplus x)$ for a prompt $x_p$ and corresponding model response $x$ - without any training or inference time overheads. While LLMs do hallucinate, they still have a significantly appreciable degree of world-knowledge grounded on truthful facts encountered in the training stage, which is reflected in the fact that hallucinations are absent in some of the autoregressively generated sample outputs when multiple model responses are considered for the same prompt. This also hints that one of the potential root-factors that induces hallucination could be the nature of auto-regressive sampling for generation in LLMs, since tokens once generated and selected at a given point in the output sequence cannot be overwritten or corrected at a later stage, and the LLM simply attempts to maximize the likelihood of the overall response moving on from that point, though it is potentially subtly sensitive to the fact that a non-factual error is already present in the response. - Furthermore, this lack of self-consistency across multiple generated sample outputs forms the basis for consistency-based methods such as SelfCheckGPT and INSIDE. 
In contrast, in this work, we propose to directly analyze the variations in model characteristics between truthful and hallucinated examples by analyzing the rich semantic representations present in the hidden activations of LLMs, since we know that the LLM does generate the truthful version, albeit with potentially lower frequency. Thus, we posit that the model is in fact sensitive to the presence of non-factual, hallucinated information in responses which would otherwise be truthful, based on factual precepts encountered in its training and that this can then be efficiently leveraged to perform hallucination detection within a single model response itself. In particular, these differences would arise in the hidden latent-space representations and the pattern of attention maps across different token representations in hallucinated responses compared to non-hallucinated responses. To quantitatively capture these variations, we thus proposed to analyze the covariance-matrix for hidden representations, and the kernel similarity map of self-attention, since these form the foremost and paramount salient characteristics of the LLM itself. - In addition, given that we require the method to be highly compute-efficient to enable real-time detection, we distill simple yet salient scalars - the mean log-determinant - from these variations in hidden representation and attention maps using eigen-analysis. Theoretically, this is well-motivated as the eigenvalues and singular values capture the interaction in latent space between the different token representations, which we know is different in hallucinated samples which contain non-truths compared to non-hallucinated sample sequences. We also remark that the Attention score can be found without explicitly computing the SVD or eigen-decompositions, enabling speedups between 44x and 450x compared to existing baselines, while also simultaneously achieving significantly improved detection performance. 
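The covariance-based analogue described above, distilling the variations in hidden representations within a single response into a mean log-determinant, might look as follows in a minimal sketch. The ridge term `eps`, the Gram-matrix form, and the toy dimensions are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def hidden_score(h, eps=1e-3):
    """Mean log-determinant of the regularized covariance of token hidden
    states h (n_tokens x d) within a single response.

    The n x n Gram form shares its nonzero eigenvalues with the d x d
    covariance; eps * I keeps it positive definite when d > n_tokens.
    """
    n = h.shape[0]
    hc = h - h.mean(axis=0, keepdims=True)   # center over the token axis
    gram = hc @ hc.T / n + eps * np.eye(n)   # regularized Gram matrix
    eigvals = np.linalg.eigvalsh(gram)       # symmetric, so eigvalsh applies
    return np.log(eigvals).sum() / n         # length-normalized log-det

rng = np.random.default_rng(0)
h = rng.standard_normal((12, 64))            # toy response: 12 tokens, d = 64
s = hidden_score(h)
```

Larger scores would indicate richer cross-token variation in latent space, the quantity the rebuttal posits differs between hallucinated and truthful responses.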
**Comparing with other Uncertainty Metrics:** - We thank the reviewer for the suggestion. We first note that the Perplexity score is closely related to the overall negative log-likelihood, namely that Perplexity is defined as the exponentiated length-normalized negative log-likelihood of a sequence. However, we did measure the detection performance of the standard negative log-likelihood as suggested by the reviewer, and we observe AUROC, Accuracy and TPR@5%FPR metrics of 52.29, 54.19 and 2.99 respectively on the FAVA Annotation Dataset, which are worse than those obtained with Perplexity (AUROC, Accuracy and TPR@5%FPR of 53.22, 58.68 and 3.59 respectively). --- Rebuttal 2: Title: Rebuttal Continued (1) Comment: **Comparing with other Uncertainty Metrics:** - We also thank the reviewer for the suggestion of sampling-based uncertainty metrics such as Predictive Entropy [1] and Semantic Entropy [2]. However, we observe that both these metrics quantify the total uncertainty over the distribution of all possible responses given a fixed prompt, in sharp contrast to the primary focus of this paper, which analyzes the hallucinatory behavior within a single fixed model response output on a given prompt. Thus, Predictive Entropy and Semantic Entropy perform population-level uncertainty estimation akin to INSIDE, and do not apply to the single-response case. - Furthermore, Semantic Entropy is likely very costly to estimate for real-time detection, since multiple sequences have to first be generated, then clustered based on bi-directional entailment again using a language model, and the entropy finally estimated over the different clusters. In some sense, a similar Semantic-Entropy-based estimation for a single response is performed in “SelfCheckGPT with BERTScore” and “SelfCheckGPT with NLI” in their original paper, but these are seen to perform worse than their best scoring method, “SelfCheckGPT with Prompt”. 
We note that LLM-Check performs better than SelfCheckGPT-Prompt as well (Tables 2, 3). However, we do remark that Predictive Entropy and Semantic Entropy might possibly be useful to mitigate hallucinations in the training phase rather than in detection, by utilizing fine-tuning to down-weight excessive overall semantic entropy over all possible observed responses to a given fixed prompt. [1] Uncertainty Estimation in Autoregressive Structured Prediction, Malinin and Gales, 2020 [2] Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation, Kuhn et al., 2023 **Interpreting Table-4 and Black-Box results:** In this setting, we utilized the Llama2-7B model for computing the various LLM-Check scores, and analyzed the detection performance on hallucinations present in texts generated from different models like Llama2-13B, GPT-4, Mistral-7B, and Llama2-7B from the RAGTruth dataset. Thus, the white-box setting is presented in the third column, where the evaluation model and generation model are the same, and the remaining columns represent the black-box evaluations. Overall, we observe an interesting trend that hallucinations in responses generated by larger models are easier to detect overall, though the frequency with which they hallucinate is lower. Thus, as a result, the overall black-box results appear to be better than the white-box detection. In particular, when the model size is kept unchanged, the white-box Llama-2-7B detection results are very similar to the black-box detection of hallucinated responses from Mistral-7B. This also potentially indicates that models may be more sensitive to hallucination-based outliers generated by other models due to their inherent differences in representation and training. 
**Implicit Modeling in Black-box setting:** We certainly agree with the reviewer that in the black-box setting, implicit modeling assumptions are made with respect to the data distribution of the original and proxy model utilized (i.e. that they are trained on some similar data distribution), and that this may not always hold. However, we do observe that recent real-world LLMs such as those from the Llama, GPT, and Claude families are trained on truly staggering amounts of data and have a significant degree of world-knowledge, as validated by their excellent performance on different benchmarks. Thus, on an extremely large variety of use-cases, these models do indeed have overlapping knowledge bases, to an extent where such transfer modeling is indeed effective, as seen with the black-box detection results in Table-4. Furthermore, we note that while methods such as INSIDE cannot inherently be used in the black-box setting, other methods such as SelfCheckGPT do indeed analyze the black-box setting too, since access to proprietary LLMs is expected to be fairly limited even moving forward. --- Rebuttal 3: Title: Rebuttal Continued (2) Comment: **Comparing Eigenvalue Analysis of Internal LLM Representations and Output Token Uncertainty Quantification:** - We propose these two distinct lines of analysis towards hallucination detection in order to adequately capture the extremely heterogeneous and diversified forms of hallucination displayed by modern Large Language Models over different domains. Towards this, the Eigen-analysis of internal LLM representations helps highlight the consistent pattern of modifications to the hidden states and the model attention across different token representations in latent space when hallucinations are present in model responses, as compared to truthful, grounded responses.
- On the other hand, the uncertainty quantification of the output tokens helps analyze hallucinations based on the likelihood assigned by the model to the actual tokens predicted at a specific point in the sequence generated auto-regressively. Indeed, especially considering that the LLM is trained using next-token prediction, we expect the probability distribution $p_f (\cdot |x)$ for a given token to be highly salient toward the relative choices available for completion, and in identifying non-factual components if present. - Thus, we utilize these diversified scoring methods from different model components to potentially maximize the capture/detection of hallucinations across their various forms without incurring computational overheads at training or inference time. In practice, we generally observe that the Eigen-based analysis of internal LLM representations achieves better detection performance than Output Token Uncertainty Quantification, as observed in Table-2 with the FAVA annotation dataset (without external references) and in Table-4 with the RAGTruth dataset (with external references). - However, on the FAVA Train-data-split, where hallucinations are inserted synthetically using GPT-3, the output-based uncertainty scoring methods are seen to be more effective at capturing the changes to the joint distribution of the sequence-level token prediction probabilities, as shown in Table-5. That said, we believe that this synthetic insertion of hallucinations is perhaps less likely to be encountered in real-world settings. **Combining Different Detection Methods:** We thank the reviewer for the suggestion to try to combine these different methods towards a unified, holistic form of hallucination detection. We observe that this can be reduced to learning a classifier that takes the different scores as input and is trained to predict the absence/presence of hallucinations.
We conducted preliminary experiments towards this on the FAVA Annotation dataset using Logistic Regression, but observed no appreciable gains over using the best-performing individual method, the Attention Score. We observe that learning an optimal combination of the different scores in a manner that generalizes well across diverse truthful and hallucinatory inputs is a highly non-trivial task, especially considering the different forms of hallucination highlighted in the fine-grained taxonomy presented in the FAVA dataset. We thank the reviewer for the detailed suggestions and constructive comments. We kindly ask if the reviewer would consider increasing their score if their concerns or questions have been addressed. We would be glad to engage further during the discussion period. --- Rebuttal Comment 3.1: Comment: Thank the authors for responding and sharing their perspectives. I have increased my score. I invite the authors to incorporate these clarifications and discussions into their final version. --- Reply to Comment 3.1.1: Comment: We thank the reviewer for the response. We will certainly incorporate the suggested clarifications and discussions in the final version of the paper. We thank the reviewer for raising their score, and for supporting acceptance of the paper.
Summary: The paper explores the challenge of hallucinations in large language models (LLMs), which are outputs that appear plausible but are inaccurate or fabricated. The paper conducts a comprehensive investigation into the nature of these hallucinations and proposes novel, compute-efficient methods for detecting them. Unlike previous approaches, which relied on multiple model responses or extensive databases, this study introduces techniques to detect hallucinations within a single response without requiring external data. It leverages analyses of the model's internal mechanisms such as hidden states, attention maps, and output prediction probabilities. The proposed methods are validated across several settings and datasets, demonstrating significant improvements in detection performance while maintaining lower computational costs compared to existing methods. Strengths: 1. **Compute Efficiency:** The proposed methods are designed to be highly compute-efficient, requiring only a fraction of the run-time compared to other baseline approaches. This efficiency makes them suitable for real-time analysis. 2. **Single Response Analysis:** Unlike many previous methods that require multiple responses or extensive external databases, the proposed techniques can detect hallucinations within a single model response. This capability is crucial for applications where generating multiple responses is impractical due to time or resource constraints. 3. **Versatility Across Settings:** The methods are tested and proven effective in both white-box and black-box settings. This flexibility ensures that they can be applied in various scenarios, regardless of the level of access to the model's internal workings. Weaknesses: 1. **Lack of Cohesion Between Methods:** The paper introduces two main approaches for detecting hallucinations: Eigenvalue Analysis of Internal LLM Representations and Output Token Uncertainty Quantification. 
One approach analyzes hallucinations from the perspective of hidden representations, while the other focuses on the probability distribution of output tokens. However, there is no clear link or cohesion between these two approaches, which might lead to a fragmented understanding of how these methods can be effectively integrated or compared. 2. **Sensitivity to Layer Selection:** As shown in Appendix D and Table 2, the performance of eigenvalue-based scores is sensitive to the choice of layers in the model. This sensitivity means that selecting the optimal layer for analysis can be computationally expensive and model-dependent. Since the performance varies significantly across different layers and models, extensive testing and validation across multiple configurations are necessary, which increases the computational overhead and complexity of deploying these methods in practical settings. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In line 339, the authors mention that "different detection methods can be the most advantageous, depending on the problem setup." Could you please elaborate on the specific scenarios in which attention-based methods perform better compared to logit-based methods, and vice versa? Additionally, is there potential to combine these two approaches to enhance the model's uncertainty evaluation effectively? 2. Given the sensitivity of eigenvalue-based scores to the choice of layers, as discussed in the appendices and Table 2, are there any rapid methods for selecting the optimal layer for analysis? Furthermore, does the choice of layer have generalizability across different models or tasks? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: As mentioned in Appendix A, the methods introduced in this paper rely on accessing internal model representations or the probability distributions of tokens to assess hallucinations. 
This requirement can pose significant challenges when dealing with commercial models such as GPT-4 or Claude, where internal details are not readily available. In scenarios lacking sufficient internal information, the accuracy of the evaluation methods could be compromised, or it might even be impossible to perform assessments effectively. Additionally, while the methods provide robust tools for detecting hallucinations in LLM outputs, they do not directly address strategies for mitigating these hallucinations. Designing effective strategies to eliminate or reduce hallucinations in LLMs, based on the insights gained from our evaluation methods, represents an intriguing area for future research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. We are encouraged that the reviewer appreciates the compute-efficiency of the proposed method, and its practicality and versatility towards real-time hallucination detection within a single model response across diverse settings. We respond to the questions raised below: **Comparing Eigenvalue Analysis of Internal LLM Representations and Output Token Uncertainty Quantification:** - We propose these two distinct lines of analysis towards hallucination detection in order to adequately capture the extremely heterogeneous and diversified forms of hallucination displayed by modern Large Language Models over different domains. Towards this, the Eigen-analysis of internal LLM representations helps highlight the consistent pattern of modifications to the hidden states and the model attention across different token representations in latent space when hallucinations are present in model responses, as compared to truthful, grounded responses. - On the other hand, the uncertainty quantification of the output tokens helps analyze hallucinations based on the likelihood assigned by the model to the actual tokens predicted at a specific point in the sequence generated auto-regressively. Indeed, especially considering that the LLM is trained using next-token prediction, we expect the probability distribution $p_f (\cdot |x)$ for a given token to be highly salient toward the relative choices available for completion, and in identifying non-factual components if present. - Thus, we utilize these diversified scoring methods from different model components to potentially maximize the capture/detection of hallucinations across their various forms without incurring computational overheads at training or inference time.
In practice, we generally observe that the Eigen-based analysis of internal LLM representations achieves better detection performance than Output Token Uncertainty Quantification, as observed in Table-2 with the FAVA annotation dataset (without external references) and in Table-4 with the RAGTruth dataset (with external references). - However, on the FAVA Train-data-split, where hallucinations are inserted synthetically using GPT-3, the output-based uncertainty scoring methods are seen to be more effective at capturing the changes to the joint distribution of the sequence-level token prediction probabilities, as shown in Table-5. That said, we believe that this synthetic insertion of hallucinations is perhaps less likely to be encountered in real-world settings. **Combining Different Detection Methods:** We thank the reviewer for the suggestion to try to combine these different methods towards a unified form of hallucination detection. We observe that this can be reduced to learning a classifier that takes the different scores as input and is trained to predict the absence/presence of hallucinations. We conducted preliminary experiments towards this on the FAVA Annotation dataset using Logistic Regression, but observed no appreciable gains over using the best-performing individual method, the Attention Score. We observe that learning an optimal combination of the different scores in a manner that generalizes well across diverse truthful and hallucinatory inputs is a highly non-trivial task, especially considering the different forms of hallucination highlighted in the fine-grained taxonomy presented in the FAVA dataset. --- Rebuttal 2: Title: Rebuttal Continued Comment: **Layer Selection:** We thank the reviewer for the valuable comment and suggestion. We first wish to clarify that the method is computationally efficient even when all model layers are used for computing Hidden or Attention scores.
Indeed, the runtime analysis presented in Figure-2 includes the empirical average runtime for computing Attention scores and Hidden scores from all 32 layers of Llama-2-7b. That is, the Attention score computation for all 32 layers takes 0.22 seconds per example, while the Hidden score computation for all 32 layers requires 2.72 seconds per example when averaged over the FAVA Annotation dataset. Since the overhead to compute scores for all layers is so small, we expect that they can be utilized for real-time analysis. - But we certainly agree with the suggestion that rapid methods for selecting layers would be beneficial in practice. We first observe that the performance obtained with the Hidden score is extremely stable across layers, and thus it is relatively easy to choose, though we recommend a middle-level layer such as layer 20 for a 32-layer, 7-billion parameter model such as Llama-2. On the other hand, we do observe a larger degree of oscillation across layers with the Attention score. Here, we perform an experiment to potentially select layers rapidly, by plotting the results obtained using a few samples, and subsequently checking whether the overall performance on the dataset can be estimated from this for each layer. We present these evaluations in Figure-2 of the rebuttal PDF. We do indeed observe a fair degree of agreement between results obtained with 5, 20 and 50 pairs as compared to the full dataset. - In general, we observe that the layers between 19 and 23 achieve close-to-optimal performance for the Attention score computed on 32-layer, 7-billion parameter models such as Llama-2-7b and Vicuna-7b, and the similar Llama-3-8b model, for white-box detection across different datasets.
We hypothesize that for the white-box setting, while the very early layers are involved in feature extraction, and the last layers are involved more towards making an optimal next-token prediction, the layers after the midpoint of the network are quite suitable for hallucination detection. In the black-box setting, this is much more difficult since it is non-trivial to map representations of intermediate layers between different LLMs, especially when the original LLM has many more layers, as with GPT-4 or Llama-2-70B. We thank the reviewer for the support for acceptance and greatly appreciate the suggestions and constructive comments. We kindly ask if the reviewer would consider increasing their score if their concerns or questions have been addressed. We would be glad to engage further during the discussion period. --- Rebuttal Comment 2.1: Title: Official Comment by Reviewer gixa Comment: Thank you very much for the detailed response and the additional experiments provided. I believe these further explanations and experiments significantly enhance the quality of the paper. Consequently, I have increased my rating from 5 to 6. I hope the author can integrate these suggestions into the final manuscript. --- Reply to Comment 2.1.1: Title: Thank you! Comment: We sincerely thank the reviewer for the valuable suggestions and detailed feedback. We will certainly incorporate these explanations and experiments into the final version of the paper. We thank the reviewer for raising their score, and for supporting acceptance of the paper.
Summary: The paper presents a comprehensive study on the detection of hallucinations in outputs produced by large language models (LLMs). The authors propose a method, LLM-Check, which aims to identify hallucinations within a single response of an LLM by analyzing internal hidden states, attention maps, and output prediction probabilities. The study evaluates LLM-Check in various settings, including scenarios with and without access to ground-truth references and in both white-box and black-box settings. The results demonstrate that LLM-Check is computationally efficient and achieves significant improvements in detection performance over existing baselines across diverse datasets. Strengths: 1. The paper introduces a novel method for detecting hallucinations using internal states and attention maps, which is less computationally intensive compared to prior methods requiring multiple responses or large databases. 2. LLM-Check is shown to be highly efficient, requiring only a fraction of the runtime of other baseline methods. Weaknesses: 1. The paper lacks a figure illustrating the overall pipeline, which makes it hard to grasp the main idea at first glance. 2. Some terminologies, such as "white-box settings," "black-box settings," and "population-level detection," are not well explained, making it hard to understand the concepts without prior knowledge. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Do you have any preliminary thoughts or plans on how the detection method could be extended to also mitigate hallucinations? 2. What does population-level mean in hallucination detection? 3. What are the differences between the black-box settings and white-box settings? What do the hidden score, attention score indicate? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. We are glad that the reviewer found the proposed method to be novel and effective towards hallucination detection in various settings, while being extremely computationally efficient. We respond to the questions raised below: > The paper lacks a figure illustrating the overall pipeline, which makes it hard to grasp the main idea at first glance. - We sincerely thank the reviewer for the suggestion, and as suggested we include a schematic of the eigen-analysis detection methods as Figure-1 in the rebuttal PDF. We shall certainly incorporate the figure into the final version of the paper. > Do you have any preliminary thoughts or plans on how the detection method could be extended to also mitigate hallucinations? - We thank the reviewer for this question. We anticipate that LLM-Check could be directly incorporated towards providing additional automated feedback in the fine-tuning stage of LLMs with Reinforcement Learning, wherein output sample generations that are detected to be hallucinatory in nature can be down-weighted appropriately. Additionally, the detection methods can assist in flagging samples in a highly efficient manner towards a customized human-feedback loop with RLHF, wherein annotators can introduce an orthogonal ranking which reflects the desired extent of factuality for the sample generations so considered. > What does population-level mean in hallucination detection? - As we note in Section 3.2, the prior method INSIDE performs hallucination detection by generating multiple stochastically sampled responses $x_1 , x_2 , . . . x_K$ for a given prompt $x_p$, and then computes the eigen-decomposition of the covariance matrix of the hidden activations of these samples. This is then used to statistically infer the presence of hallucinations within this generated sample set $x_1 , x_2 , . . .
x_K$, based on a possible lack of self-consistency at a “population level” between samples within this specific set. This is in contrast to single-response detection, which infers the presence/absence of hallucinations in a given fixed (single) output response $x$ for a given prompt $x_p$, as performed in our method, LLM-Check. > What are the differences between the black-box settings and white-box settings? What do the hidden score, attention score indicate? - For a given prompt $x_p$, we consider hallucination detection in an output response $x$ that is generated by a given LLM $f$. If the original LLM $f$ is available and accessible to compute the scores proposed with LLM-Check, the setting is considered to be “white-box”. If the original LLM $f$ that generated the response $x$ for prompt $x_p$ is no longer available or is inaccessible, we utilize an auxiliary substitute LLM $\hat{f}$ (such as open-source LLMs like Llama-2) to compute scores with internal activations and attention kernel maps using teacher-forcing, and this setting is considered to be “black-box”. The hidden score and attention score are real-valued scalar scoring metrics that can then be thresholded to determine the presence/absence of hallucinations in a given output response $x$, using the original LLM $f$ itself in the white-box setting, and an auxiliary substitute LLM $\hat{f}$ in the black-box setting. The hidden score and attention score are derived from the mean log-determinant of the covariance matrix of the hidden representations, and from the kernel similarity map of the LLM's self-attention, respectively. Theoretically, this mean log-determinant is given by the average logarithm of the corresponding singular values and eigenvalues, which capture the interaction in latent space between the different token representations; this interaction differs in hallucinated samples containing non-truths compared to non-hallucinated sequences.
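The mean log-determinant score described above can be sketched in a few lines of NumPy (an illustrative approximation, not the paper's exact implementation; the hidden states are random placeholders and the regularization constant `eps` is an assumption added to keep the log-determinant finite):

```python
import numpy as np

def hidden_score(hidden_states, eps=1e-3):
    """Mean log-determinant of the (regularized) token-token covariance of a
    response's hidden representations, i.e. the average log of its eigenvalues.

    hidden_states: (num_tokens, dim) array of per-token hidden vectors.
    """
    centered = hidden_states - hidden_states.mean(axis=0, keepdims=True)
    cov = centered @ centered.T / hidden_states.shape[1]  # (num_tokens, num_tokens)
    # Regularize the PSD covariance, then average the log-eigenvalues.
    eigvals = np.linalg.eigvalsh(cov + eps * np.eye(cov.shape[0]))
    return float(np.log(eigvals).mean())

rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 64))  # stand-in for one response's hidden states
print(hidden_score(tokens))
```

The score is then thresholded on a validation split to declare a response hallucinated or not; more spread-out token representations yield larger eigenvalues and hence a larger score.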
We sincerely thank the reviewer for the support for acceptance and greatly appreciate the suggestions and constructive comments. We kindly ask if the reviewer would consider increasing their score if their concerns or questions have been addressed. We would be glad to engage further during the discussion period. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I thank the authors for their detailed response and expect that my suggestions for clarity will be incorporated into the next version. --- Reply to Comment 1.1.1: Comment: We are glad that you found our rebuttal detailed and helpful. We will certainly incorporate the suggestions and clarifications in the final version of the paper. Once again, we thank you for your support for acceptance of the paper.
Rebuttal 1: Rebuttal: **A note to all Reviewers** We sincerely thank the reviewers for their valuable feedback and constructive comments on our paper. We are glad to note that the reviewers appreciate the comprehensive study into the nature of hallucinations in LLMs, and the practicality and effectiveness of the novel LLM-Check detection method so proposed, as validated across diverse detection settings and datasets. Furthermore, we are glad that the reviewers appreciate the significant improvements in detection performance achieved by LLM-Check over existing baselines, while requiring only a fraction of the computational cost (speedups of up to 45x and 450x), as the proposed method does not utilize multiple model responses or extensive external databases. We greatly appreciate the valuable comments and detailed suggestions, and we will diligently incorporate them in the final version of the paper. Pdf: /pdf/9b93a616039e3c520d1abc3e527be78ff24d6cfa.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Towards Effective Planning Strategies for Dynamic Opinion Networks
Accept (poster)
Summary: This manuscript investigates intervention planning, a critical issue in complex social networks. To address this challenge, the authors introduce a novel ranking algorithm and a reinforcement learning-based dynamic planning framework. Three cases of opinion and trust values are considered, enhancing the method's applicability in real-world scenarios. The performance of the developed method is demonstrated through comprehensive simulations on synthetic networks. Strengths: This manuscript is well-written and engaging. The contributions of this work and the rationale for considering different models and factors are clearly illustrated. Various scenarios are accounted for to demonstrate the method's superiority and applicability. Four research questions are reasonably addressed in the manuscript. Overall, this work is novel and interesting and deserves to be published in NeurIPS. Weaknesses: One concern I have is the limited size of networks analysed in the simulation. As stated by the authors, “as network size increases, the problem becomes computationally intractable” (Line 6). Why do the authors only consider synthetic networks with fewer than 50 nodes in the simulation? Does this limitation affect the potential application of this method in real-world scenarios? Please provide reasonable explanations for this point. Additionally, the performance should be compared with state-of-the-art (SOTA) methods. Technical Quality: 4 Clarity: 4 Questions for Authors: Additional questions and suggestions: 1. The authors mention that intervention planning involves two parts. However, it is not entirely clear how “exerting control” is considered in this work. Does it only mean adding green nodes in Figure 1? Please explain this more clearly. 2. Only synthetic networks are considered to evaluate the developed method's performance, which could limit its applicability. 
The authors should consider some real-world networks, even without real-time opinion propagation data, to better demonstrate the method's superiority. 3. In future work, real-time opinion propagation data should be considered, which could further enhance this study. This is not a critique of the current work but a suggestion for future research. 4. A significant limitation is the size of networks analysed. The largest network contains only 50 nodes, which could impact the evaluation. Real-world networks typically have more than one million users (or at least more than one thousand users). The authors should explain how this method can be applied to real-world scenarios and justify why only networks with fewer than 50 nodes were analysed. 5. What is the time complexity of this method? Is it difficult to analyse large-scale networks? Please provide an analysis of its time complexity and the resources needed for simulation. 6. Can this method be compared with SOTA methods? 7. Is Figure 19 correct? All bars and subfigures appear to be the same. Please verify this. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors have adequately addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing our contributions and acknowledging our effort to cover various scenarios in our evaluation. We are also thankful for the constructive feedback and suggestions for future work. Yes, Figure 19 is correct. The barplot displayed in Figure 19 depicts the mean infection rate for different reward functions on Dataset v2 with degree of connectivity 1, differentiated by the number of initial misinformation sources (Inf.) and action budgets (Act.: a1, a2, a3). We observe an identical average infection rate in all cases because each model successfully stops the spread of misinformation in the first timestep. This can be attributed to the low degree of connectivity (one) of the initial infected node. **Remark 1, Question 2 & Question 4: Scalability and results from real-world networks** In our work, we develop planning algorithms and analyze their efficacy using synthetic data in controlled settings. The synthetic data used in our study are generated using the Watts-Strogatz (or small-world) network model. As demonstrated in various studies, this network model captures the characteristics of common social and community networks [1]. Additionally, we have also evaluated our planning algorithms using directed and undirected real-world network models reported in the literature. These evaluations are included in the rebuttal PDF (Table 2). These additional experiments confirm the major findings reported in our paper (Research Questions in Section 5). Furthermore, the Graph Convolutional Networks (GCNs)-based planners developed in our work perform effectively when applied to larger dynamic network models (as illustrated in Table 1 of our paper and Table 2 of the rebuttal PDF) even when they are trained using data generated from smaller networks (RQ4 from Section 5).
Testing with GCNs on larger networks is significantly simpler and less computationally intensive than training, making it a practical solution for scaling our approach to handle extensive network sizes efficiently. Finally, we would like to emphasize that existing research work [e.g., citation 14 from our manuscript] has analyzed opinion networks with a maximum of 1005 nodes, with an opinion propagation model that considers only binary opinion and trust values (Case 1 in our paper in Section 2.1). In contrast, we have also considered more expressive opinion network models (Case 2 and Case 3 -- Section 2.1) in our study. **Question 1: Details of World Model** In our work, "exerting control" refers to applying interventions (disseminating accurate information) for a selected node within the network to combat misinformation. Specifically, this involves adding an external input to the propagation model. For example, when agent $k$ communicates with agent $i$, agent $i$’s opinion is updated using the propagation model given in Equation 1 from our paper. If the planner chooses agent $i$ as the candidate node for intervention, exerting control implies an external input applied to this model, and the opinion update is then governed by the following discrete map: $x_i(t+1) = x_i(t) + \mu_{ik}(x_k(t) - x_i(t)) + \mu_{ij}(x_j(t) - x_i(t))$, where $\mu_{ij}$ is the trust value associated with the external source $j$ sharing official information, and $x_j$ is the associated opinion value. **Question 5: Time Complexity Analysis** The time complexities for the two main components of our method—the Ranking Algorithm and the Deep Value Network (DVN)—are as follows: Ranking Algorithm (Algorithm 1 in Appendix Section A.5.1): The time complexity is $O(V \times (V + E))$, where $V$ represents the number of vertices and $E$ is the number of edges in the network.
Deep Value Network (DVN) (Algorithm 2 in Appendix Section A.5.2): The overall complexity of the DVN with experience replay is $O(e_{\text{max}} \times (V \times (V + E) + nm))$, where $e_{\text{max}}$ is the maximum number of episodes, $n$ is the batch size, and $m$ represents the computational complexity of operations within the neural networks. Analyzing large-scale networks with our Ranking Algorithm and Deep Value Network (DVN) can be computationally demanding. The quadratic dependency on the number of nodes $V$ and the involvement of all edges $E$ in the Ranking Algorithm, and the complex operations of the DVN, make the method resource-intensive. As specified earlier, we find that GCNs trained with data generated from smaller networks are useful for intervention planning even when the size of the opinion network increases. Due to the richness of the dynamic models in Cases 2 and 3, intervention planning with these expressive opinion propagation models was computationally more demanding than in Case 1 (which is commonly studied in the literature). **Question 6: Comparison with existing methodologies** Most prior research has only addressed static network scenarios (primarily Case 1), without considering the rich dynamics of Cases 2 and 3 (see Section 2.1 for the distinction). This is the first study to incorporate such rich dynamics of opinion networks in the study of intervention planning strategies. We have reported a comparative analysis of our Ranking Algorithm-based Supervised Learning with three other planning strategies (random, max_degree_static, max_degree_dynamic; see Figures in Appendix section A.6.1) not just for Case 1 but also for Cases 2 and 3. In addition, we have also reported a comprehensive analysis (by varying degrees of initial infected nodes and action budgets) of our Reinforcement Learning-based Centralized Dynamic Planner across various reward models for all three cases.
This includes results of static network scenarios and candidate nodes as reward models which are considered in the existing work [citation 14 from our paper]. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. The authors have addressed most of my concerns.
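The discrete opinion-update map quoted in the rebuttal above can be sketched directly in code. This is a minimal illustration only; the function name and the example trust/opinion values are hypothetical, not taken from the paper.

```python
def update_opinion(x_i, x_k, mu_ik, x_j, mu_ij):
    """One step of the discrete opinion-update map with an external input:
    x_i(t+1) = x_i(t) + mu_ik * (x_k(t) - x_i(t)) + mu_ij * (x_j(t) - x_i(t)),
    where x_k is a communicating neighbour's opinion and x_j is the opinion
    of the external (official) source j, weighted by trust mu_ij."""
    return x_i + mu_ik * (x_k - x_i) + mu_ij * (x_j - x_i)

# Agent at opinion 0.2 hears a neighbour at 0.8 (trust 0.5) plus an
# official source at 1.0 (trust 0.3): 0.2 + 0.5*0.6 + 0.3*0.8 = 0.74
new_x = update_opinion(0.2, 0.8, 0.5, 1.0, 0.3)
```

Note that when both trust values are zero the opinion is unchanged, matching the role of the external input as an additive intervention term.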
Summary: The paper investigates intervention planning aimed at disseminating accurate information within dynamic opinion networks using learning strategies. It introduces a novel ranking algorithm to identify key nodes for disseminating accurate information and develops a Reinforcement Learning (RL)-based dynamic planning framework. The framework is tested on networks governed by two propagation models, incorporating both binary and continuous opinion and trust representations. The experimental results indicate that the proposed strategies can enhance infection rate control, especially with increased action budgets. Strengths: 1. Timely Topic: The paper addresses the significant issue of misinformation spread in social networks, which is a pressing problem in today’s digital age. 2. Thorough Analysis: The paper presents a solid and detailed analysis of the proposed methods, providing in-depth insights into their effectiveness and behavior under various conditions. Weaknesses: 1. Lack of Innovation: The primary issue with the paper is the lack of significant innovation. The methodologies proposed are incremental improvements over existing approaches rather than groundbreaking new techniques. 2. Comparison with Existing Methods: The paper does not sufficiently compare its methods with existing state-of-the-art techniques. This makes it challenging to assess the true novelty and effectiveness of the proposed approaches. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. How does the proposed RL-based dynamic planning framework compare with other existing state-of-the-art methods for misinformation control in terms of both performance and computational efficiency? 2. Can the proposed methodologies be scaled effectively to much larger networks, and what are the computational requirements for such scaling? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Highlight Novelty: Clearly articulate the novel contributions of the paper.
It would be beneficial to focus on what sets this work apart from existing research in the same area. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for pointing out the novelty of the ranking algorithm and the thoroughness of the analysis presented in the paper. In the following, we address the specific concerns on novelty, comparisons with existing literature, and the question on scalability. **Remarks 1 & 3: Novelty** Existing research on planning problems in the context of opinion networks often overlooks key features of opinion propagation models, such as their rich network dynamics, asynchronous communication, and the impact of factors like the degree of infected nodes, action budget, and various reward models on the effectiveness of planners. Our work addresses this gap by analyzing three distinct cases of opinion network models (Section 2.1)—ranging from binary to continuous spectra of opinion and trust values—and asynchronous communication (Section 2.2). This approach introduces greater richness and expressiveness to opinion dynamics models, making the model-based analysis of opinion propagation more reflective of real-world scenarios. Furthermore, we develop comprehensive datasets with a wide range of Watts-Strogatz (or small-world) network topologies (Section 4.2), varying degrees of initial infected nodes, action budgets, and reward models—from those requiring local network information to those utilizing global real-time network states (Section 3.2.1). From an algorithmic perspective, we extend the static ranking algorithm based on the classical BFS to the problem of centralized planning in dynamic opinion networks. Supporting supervised learning-based planners, this algorithm can help planners when the network size is small and the global network information is available. To understand planning strategies for large networks, we also investigate reinforcement-learning-based centralized planners.
In particular, our study explores five distinct reward structures, enhancing our understanding of reward dynamics and their computability (based on access to local vs global network information) that may be applicable to diverse applications depending on network observability and size. Further, the use of Deep Value Networks (DVN) proves more appropriate for our chosen application compared to the traditional and popular architecture of Deep Q-Networks (DQN), which has been investigated to study planning problems for opinion propagation models with simple binary state and trust parameters. DVN-based planners studied in our work are better suited for environments where the number of intervention nodes (action budget) is not fixed and varies dynamically. This flexibility allows DVN to handle dynamic and large-scale networks with greater efficiency as shown in our work [see citation 14 from the manuscript]. This makes our work a significant and non-trivial contribution to the existing literature on this topic. **Remark 2 & Question 1: Comparison with existing methodologies** Most prior research has only addressed static network scenarios (primarily Case 1), without considering the rich dynamics of Cases 2 and 3 (see Section 2.1 for the distinction). This is the first study to incorporate such a rich dynamic nature of opinion networks in the study of intervention planning strategies. We have reported a comparative analysis of our Ranking Algorithm based Supervised Learning with three other planning strategies (random, max_degree_static, max_degree_dynamic, see Figures in Appendix section A.6.1) not just for Case 1 but also for Cases 2 and 3. In addition, we have also reported a comprehensive analysis (by varying degrees of initial infected nodes and action budgets) of our Reinforcement Learning-based Centralized Dynamic Planner across various reward models for all three cases. 
This includes results of static network scenarios and candidate nodes as reward models which are considered in the existing work [citation 14 from our paper]. **Question 2: Scalability** We have analyzed the performance of the developed planners as the number of opinion network nodes increases. In this context, the GCN-based planners developed in our work perform effectively when applied to larger dynamic network models (as illustrated in Table 1 of our paper and Table 2 of rebuttal PDF) even when they are trained using data generated from smaller networks (RQ4 from Section 5). We would like to emphasize that existing research work [e.g., citation 14 from our manuscript] has analyzed opinion networks with a maximum of 1005 nodes with an opinion propagation model that considers only binary opinion and trust values (Case 1 in our paper in Section 2.1). In contrast, we have also considered more expressive opinion network models (Case 2 and Case 3 -- Section 2.1) in our study. **_Computational Requirements_** To answer the question on computational requirements for scalability, we consider the computational intensity of our algorithms. The Ranking Algorithm (Algorithm 1 in Appendix Section A.5.1) has a time complexity of $O(V \times (V + E))$, which grows quadratically with the number of vertices, making it computationally intensive for larger networks. The training of DVN (Algorithm 2 in Appendix Section A.5.2) has a time complexity of $O(e_{\text{max}} \times (V \times (V + E) + nm))$, where $e_{\text{max}}$ is the maximum number of episodes, $n$ is the batch size, and $m$ represents the computational complexity of operations within the neural networks. This indicates substantial computational requirements due to the need for simulating network dynamics across multiple training episodes and updating complex neural network parameters repeatedly.
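The $O(V \times (V + E))$ bound quoted above corresponds to running a linear-time graph traversal (BFS) from every vertex. The following is an illustrative sketch of such an all-sources pass, not the paper's Algorithm 1; the function name and the toy adjacency list are hypothetical.

```python
from collections import deque

def reachable_counts(adj):
    """Run BFS from every vertex of a graph given as adjacency lists.
    One BFS costs O(V + E), so the full pass is O(V * (V + E)) --
    the same bound as quoted for the ranking pass."""
    counts = {}
    for src in adj:                      # V outer iterations
        seen = {src}
        queue = deque([src])
        while queue:                     # one BFS: O(V + E) total
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        counts[src] = len(seen)          # nodes reachable from src
    return counts

# Hypothetical 4-node path graph: 0 - 1 - 2 - 3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
counts = reachable_counts(adj)
```

The outer loop over sources is what produces the quadratic growth in $V$: each of the $V$ traversals is itself $O(V + E)$.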
As specified earlier, we find that GCNs trained with data generated from smaller networks are useful for intervention planning even when the size of the opinion network increases. Due to the richness of the dynamic models in Cases 2 and 3, intervention planning with these expressive opinion propagation models was computationally more demanding than in Case 1 (which is commonly studied in the literature). --- Rebuttal 2: Title: Accept Comment: The rebuttal addressed most of my concerns, especially those regarding the scalability of the approach. --- Rebuttal Comment 2.1: Title: Thank you Comment: Thank you for your comments. We are thankful that we could address your concerns.
Summary: This paper studies strategic planning for disseminating credible information within dynamic opinion networks. The main two contributions are (1) a ranking algorithm to identify influential nodes to spread accurate information, and (2) an RL-based framework for adaptive intervention strategies. Strengths: 1. Paper is well-written and easy to follow. 2. The problem at hand is interesting. Weaknesses: 1. One of the main building blocks of the paper is a proposed ranking algorithm to mitigate the computational intractability of intervention planning, while the algorithm itself is just a brute force search, and computationally infeasible. The authors then propose an RL solution to address the problem. My question is what did the authors achieve here? 2. My other concern is about the scalability of the solution as mentioned several times throughout the paper, making it one of the main contributions of the paper, without providing further analysis that proves the proposed framework is indeed scalable. The largest network size used in the paper is 50, which makes it hard to believe the method is scalable, especially since their ranking algorithm is brute force (can we quantify scalability other than network size by the way?). I would recommend increasing network size to more than 50, while providing detailed statistics of the graphs (e.g., # edges, diameter of the graph, etc.). Furthermore, I assume the 1000 states used correspond to the dynamic nature of their input graphs. This has to be clarified in the paper. 3. Since the proposed method is claimed to be scalable, it is necessary to include analysis on how varying the number of states (or any other dynamic features of the input graphs) might affect the inference results. 4. Normally, one would expect the R2 reward function to achieve better results as it minimizes the # candidate nodes and infection rate, but as stated by RQ3, this isn’t the case.
This requires a better explanation and justification other than just showing numbers in Table 1, especially because according to Table 1, R1,2 and 4 are quite close as far as I can see. Also, in general, except for R0 and R3, differences between others are insignificant. Some kind of significance test is thus recommended to clarify this better. 5. One last question is: does adding directions to the input graphs change anything in the paper? This needs to be shown in the paper using some further analysis and experiments. 6. While the paper is very well-written, it still needs a thorough proof-read to fix some typos and mistakes (e.g., in the RQ3 section, Hypothesis RQ4 needs to be RQ3). 7. I’m not sure if I understood how the Infection rate=0.7 for figure 2 was calculated and compared to that of figure 1. I would recommend adding the infection rate for figure 1 somewhere and explain how to calculate this using an example. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the detailed comments that include some questions. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: To some extent. Again, please see the questions above as they pose some challenges and limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. Thank you for pointing out that the paper is well written and for your valuable suggestions for improving our paper. With regard to the infection rate calculation: it is calculated as the ratio of the number of infected nodes to the total number of nodes within the network at a given timestep, as mentioned in Section A.3. We will clarify this in the main paper, and add this to Figure 1 for consistency. We will also thoroughly proof-read our paper to avoid typographical errors. **Remark 1: Building Blocks** Our paper focuses on two methodologies: Supervised Learning (SL) and Reinforcement Learning (RL). We aimed to extend SL to dynamic network scenarios (Cases 2 and 3), which previous research works have not explored. To achieve this, we introduced a ranking-algorithm-based label-generation approach within SL. As discussed in the paper (Section 3.1), while the ranking algorithm provided improvements, it proved computationally infeasible when training on larger networks. To address this, we investigated RL-based solutions (Section 3.2). Based on our analysis, we observed that the RL-based solution using a GCN model performed better in generating effective plans for networks with more nodes even when they were trained with data generated from networks with fewer nodes (RQ4). In summary, we start by extending the static ranking algorithm based on the classical Breadth First Search to the problem of centralized planning in dynamic opinion networks. Supporting supervised learning-based planners, this algorithm can help planners when the network size is small and the global network information is available. To understand planning strategies for large networks, we also investigated reinforcement-learning-based centralized planners. We have reported comprehensive analyses in our work that were helpful in addressing specific research questions (Section 5 RQ1 to RQ4), which were not previously reported in the literature.
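The infection-rate definition given in the rebuttal above (infected nodes divided by total nodes at a timestep) is a one-line computation; the sketch below is a hypothetical illustration, not code from the paper.

```python
def infection_rate(infected_flags):
    """Infection rate at one timestep: number of infected nodes divided
    by the total number of nodes, per the definition in Section A.3."""
    return sum(infected_flags) / len(infected_flags)

# E.g., an infection rate of 0.7 (as discussed for Figure 2) corresponds
# to 7 infected nodes in a hypothetical 10-node network.
flags = [True] * 7 + [False] * 3
rate = infection_rate(flags)
```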
**Remark 2, 3 & 5: On Scalability and Directed Graphs** Following the reviewers' suggestions, we have included additional analysis in the attached PDF. These include results on four real-world network models used in previous works to analyze intervention planning. These include both directed and undirected networks with the number of nodes ranging from 34 to 2000. These experimental results align with the major findings reported in our paper (Research Questions in Section 5). The synthetic data used in our study is based on Watts-Strogatz (or small-world) network topology, which, as demonstrated in various studies, captures the characteristics of many common social and community networks [1]. The network statistics for this model are governed by the Watts-Strogatz network parameters, which are given in Section 4.2 of our paper. The network statistics of the four real-world networks considered are reported in Table 2 of the attached PDF. Further, we have considered two dataset versions (Section 4.2) that account for a varying number of infected nodes and varying degrees of initial infected nodes. In total, we have 21 unique datasets, and 1000 samples were randomly generated in each of these datasets for testing. We have analyzed the performance of the developed planners as the number of opinion network nodes increases. In this context, the GCN-based planners developed in our work perform effectively when applied to dynamic network models with more nodes (as illustrated in Table 1 of our paper and Table 2 of rebuttal PDF) even when they are trained using data generated from networks with fewer nodes (RQ4 from Section 5). We would like to emphasize that existing research work [e.g., citation 14 from our manuscript] has analyzed opinion networks with a maximum of 1005 nodes with an opinion propagation model that considers only binary opinion and trust values (Case 1 in our paper in Section 2.1).
In contrast, we have also considered more expressive opinion network models (Case 2 and Case 3 -- Section 2.1) in our study. Due to the richness of the dynamic models in Cases 2 and 3, intervention planning with these expressive opinion propagation models was computationally more demanding than in Case 1. Thus, with regard to scalability, in our work, we mainly focus on the performance of centralized planners when the number of nodes increases. Varying the number of node features or the dimension of the opinion propagation model will introduce factors such as multiple topics and topic-dependencies in opinion networks. Though this is an important problem, it is beyond the current scope and will be investigated in our future study. **Remark 4: On Reward Models** Thank you for highlighting this intriguing result, which challenges the intuitive expectation that the R2 reward function would outperform others by minimizing candidate nodes and infection rate. *Global vs. Local Information:* As highlighted in RQ3 of our paper, our findings show that reward functions utilizing global information do not necessarily benefit from the addition of local information. This is evident with R2, which incorporates both global and local data but does not significantly outperform others using solely global metrics. The same is evident from Figure 7 in our appendix, which compares the MSE loss during training across different reward functions. **R2 (Green)** shows more variance and higher MSE loss, suggesting less stability and efficiency in learning compared to R1, R3, and R4. *Reward Richness and Application:* Although the differences in results for R1, R2, and R4 may appear insignificant, each reward type's computation and informational needs are distinct. While R4 requires the observability of the entire network, R1 only requires the observability of neighboring candidate nodes of the infected nodes.
This diversity allows us to cater to various real-world applications, underscoring the necessity of evaluating multiple reward models to gain a comprehensive understanding. --- Rebuttal Comment 1.1: Title: Accept Comment: Thank you for the detailed rebuttal that addressed most of my concerns. --- Reply to Comment 1.1.1: Comment: Thank you for your comments. We are thankful that we could address your concerns.
Summary: This study explores intervention strategies aimed at curbing the spread of misinformation in dynamic social networks. The authors propose a novel ranking algorithm to identify influential nodes for disseminating accurate information and an RL-based framework to address the computational complexity associated with label generation. The paper concludes that by integrating more realistic and complex modeling approaches, label generation techniques, and training methodologies, the proposed strategies can significantly mitigate the impact of misinformation in social networks. Strengths: - The studied problem is of great importance and has considerable practical significance. - The paper introduces a novel ranking algorithm that effectively identifies key nodes within a network for the dissemination of accurate information. This algorithm is integrated with a supervised learning framework, providing a robust method for training neural network classifiers that can scale and generalize across different network structures. - By employing a reinforcement learning-based dynamic planning framework, the paper addresses the computational challenges associated with large networks. This RL methodology allows for the development of adaptive intervention strategies that can respond in real time to the evolving patterns of misinformation spread, offering a significant improvement over traditional static approaches. - The authors have made the code available and provided detailed hyperparameter settings and computational resource information, enhancing the reproducibility of the study. Weaknesses: - While the proposed models and algorithms demonstrate efficacy in controlled settings, there may be concerns regarding their scalability when applied to real-world scenarios. Technical Quality: 3 Clarity: 3 Questions for Authors: Why use ResNet and GCN as backbones?
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors adequately addressed the limitations and broader impacts in the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for acknowledging the technical contributions and novelty of our work and for providing valuable feedback. **Remark 1: Real-world network models** In our work, we develop planning algorithms and analyze their efficacy using synthetic data in controlled settings. The synthetic data used in our study are generated using the Watts-Strogatz (or small-world) network model. As demonstrated in various studies, this network model captures the characteristics of common social and community networks [1]. Additionally, we have also evaluated our planning algorithms using directed and undirected real-world network models reported in the literature. These evaluations are included in the rebuttal PDF (Table $2$). These additional experiments confirm the major findings reported in our paper (Research Questions in Section 5). Furthermore, the Graph Convolutional Networks (GCNs)-based planners developed in our work perform effectively when applied to dynamic network models with more nodes (as illustrated in Table $1$ of our paper and Table $2$ of rebuttal PDF) even when they are trained using data generated from networks with fewer nodes (RQ4 from Section 5). Testing with GCNs on larger networks is significantly simpler and less computationally intensive than training, making it a practical solution for scaling our approach to handle extensive network sizes efficiently. Finally, we would like to emphasize that existing research work [e.g., citation 14 from our manuscript] has analyzed opinion networks with a maximum of $1005$ nodes with an opinion propagation model that considers only binary opinion and trust values (Case 1 in our paper in Section 2.1). In contrast, we have also considered more expressive opinion network models (Case 2 and Case 3 -- Section 2.1) in our study. **Question 1: Usage of ResNet and GCN** Graph Convolutional Networks (GCNs) and Residual Networks (ResNet) have been widely used for modeling graph-structured data.
Since opinion dynamics models generate graph-structured data, we incorporated these neural network models in our study to develop planners. In our analysis, we found that GCNs offer better scalability and performance when compared with ResNet (RQ4 from Section 5). [1]. Watts, D.J. and Strogatz, S.H., 1998. Collective dynamics of ‘small-world’ networks. Nature, 393(6684), pp. 440-442. --- Rebuttal Comment 1.1: Comment: Thank the authors for their response. I will keep my score. --- Reply to Comment 1.1.1: Comment: Thank you for your response. We wanted to check if our rebuttal addressed the raised concerns. We will be happy to address specific questions.
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable feedback and suggestions. In this rebuttal, we have tried to address all the specific concerns and comments of individual reviewers. To support our response to the reviewers' comments, we include two additional tables in the attached PDF that are not part of the manuscript. These tables are intended to reinforce key findings, provide supporting evidence for our claims, and help clarify some of the reviewers' comments addressed in our rebuttal. Table 1, in the attached Rebuttal PDF, highlights the key features of our work and their implications, emphasizing aspects not covered in previous studies related to planning in the context of opinion networks available in the literature. Existing research on planning problems in the context of opinion networks often overlooks key features of opinion propagation models, such as their rich network dynamics, asynchronous communication, and the impact of factors like the degree of infected nodes, action budget, and various reward models on the effectiveness of planners. Our work addresses this gap by analyzing three distinct cases of opinion network models (Section 2.1)—ranging from binary to continuous spectra of opinion and trust values—and asynchronous communication (Section 2.2). This approach introduces greater richness and expressiveness to opinion dynamics models, making the model-based analysis of opinion propagation more reflective of real-world scenarios. Furthermore, we develop comprehensive datasets with a wide range of Watts-Strogatz (or small-world) network topologies (Section 4.2), varying degrees of initial infected nodes, action budgets, and reward models—from those requiring local network information to those utilizing global real-time network states (Section 3.2.1). The small-world network topology subsumes some of the common social and community networks as shown in various studies [1]. 
As a result, experiments on four of the real-world network models (Rebuttal PDF - Table 2) (directed and undirected), previously studied in the literature, align with the major findings reported in our manuscript (Section 5 - Research Questions). This makes our work a significant and non-trivial contribution to the existing literature on this topic. From an algorithmic perspective, we extend the static ranking algorithm based on the classical Breadth First Search to the problem of centralized planning in dynamic opinion networks. Supporting supervised learning-based planners, this algorithm can help planners when the network size is small and the global network information is available. To understand planning strategies for large networks, we also investigated reinforcement-learning-based centralized planners. In particular, our study explores five distinct reward structures, enhancing our understanding of reward dynamics and their computability (based on access to local vs global network information) that may be applicable to diverse applications depending on network observability and size. Further, the use of Deep Value Networks (DVN) proves more appropriate for our chosen application compared to the traditional and popular architecture of Deep Q-Networks (DQN), which has been investigated to study planning problems for opinion propagation models with simple binary state and trust parameters. DVN-based planners studied in our work are better suited for environments where the number of intervention nodes (action budget) is not fixed and varies dynamically. This flexibility allows DVN to handle dynamic and large-scale networks with greater efficiency as shown in our work [see citation 14 from the manuscript]. [1]. Watts, D.J. and Strogatz, S.H., 1998. Collective dynamics of ‘small-world’ networks. Nature, 393(6684), pp. 440-442. Pdf: /pdf/ba26ee7554e2114a783a1f8c97188d5be27cdb2a.pdf
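The Watts-Strogatz (small-world) topology used to generate the synthetic datasets discussed above can be built with a short ring-lattice-plus-rewiring routine. The sketch below is an illustrative stdlib-only implementation under the classic construction (ring lattice of $k$ nearest neighbours, each edge rewired with probability $p$); it is not the generator used in the paper, and the function name and parameter values are hypothetical.

```python
import random

def watts_strogatz_edges(n, k, p, seed=0):
    """Minimal Watts-Strogatz small-world generator (illustrative sketch):
    start from a ring lattice where each node connects to its k nearest
    neighbours, then rewire the far endpoint of each edge with probability p."""
    rng = random.Random(seed)
    edges = []
    for i in range(n):
        for j in range(1, k // 2 + 1):
            edges.append((i, (i + j) % n))   # ring lattice
    rewired = set()
    for (u, v) in edges:
        if rng.random() < p:
            w = rng.randrange(n)
            # avoid self-loops and duplicate rewired edges
            while w == u or (u, w) in rewired or (w, u) in rewired:
                w = rng.randrange(n)
            rewired.add((u, w))
        else:
            rewired.add((u, v))
    return rewired

# With p = 0 the result is exactly the ring lattice: n * k / 2 edges.
ring = watts_strogatz_edges(10, 4, 0.0)
```

Small rewiring probabilities keep the high clustering of the lattice while sharply reducing path lengths, which is the "small-world" property the rebuttal appeals to.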
NeurIPS_2024_submissions_huggingface
2024
STL: Still Tricky Logic (for System Validation, Even When Showing Your Work)
Accept (poster)
Summary: The paper investigates the efficacy of formal specifications (specifically, Signal Temporal Logic (STL)) for human validation of autonomous systems. The authors distinguish between *verification* (whether an implemented policy adheres to a formal specification of its behaviors) and *validation* (whether the system's specifications align with higher-level goals). They correctly assert that validation is an inherently human-centered and subjective process. While many existing explainable AI methods increase the system's explainability, humans still struggle to validate robot behaviors specified through formal methods. The main question addressed in the paper is *whether active learning methods can improve human performance in validating formal specifications.* The paper conducts a comprehensive user study (n=55), dividing participants into three groups: no active learning (control), active learning with no live feedback (AL-NF), and active learning with live feedback (AL-WF). The main conclusions can be summarized as follows: - There is no significant improvement in validation accuracy with active learning. - Active learning increases user engagement, and active learning with feedback reduces user frustration. - Active learning appears to have a greater impact on participants with lower STEM experience (the sample size was, however, too small for definitive conclusions). Strengths: 1. The paper correctly states that most human users are not capable of validating system behaviors using formal temporal specifications. It raises an interesting question about whether active learning methods can improve this capability. The experimental results are valuable as they show no significant improvement over the control group with active learning, which is surprising and definitely calls for further research at the intersection of human learning theories and formal system specification methodologies to bridge this gap. Weaknesses: 1.
The main weakness of the paper is the mismatch between the claims made in the introduction and the focus of the experiments. The introduction argues that formal specifications are not suitable for explaining whether a system achieves higher-level, human-centered goals. However, the experiments measure the capacity of human users to infer the achievability of formally specified goals, specifically whether an agent can safely reach a goal in 30 steps. This task can be automatically checked using model checking techniques, such as providing the STL specification to a model checker to verify goal satisfaction and produce counter-examples if not satisfied. Although the authors mention this misalignment in the limitations section and correctly state that it is unavoidable to keep the success criteria objective and machine-checkable, the significant mismatch makes it difficult to draw conclusions regarding the efficacy of formal specifications in validating human-centered, higher-level goals. A short discussion of this mismatch in the introduction or later sections would have been beneficial. 2. The introduction and related works sections do not discuss the state-of-the-art techniques for enhancing the validation process of formal temporal specifications. There is already work on improving the interpretability of temporal logic specifications, such as translation to natural language [1] and hierarchical specifications [2]. [1] Cherukuri, H., Ferrari, A., Spoletini, P. (2022). Towards Explainable Formal Methods: From LTL to Natural Language with Neural Machine Translation. In: Gervasi, V., Vogelsang, A. (eds) Requirements Engineering: Foundation for Software Quality. REFSQ 2022. Lecture Notes in Computer Science, vol 13216. Springer, Cham. https://doi.org/10.1007/978-3-030-98464-9_7 [2] X. Luo and C. Liu, “Simultaneous task allocation and planning for multi-robots under hierarchical temporal logic specifications,” arXiv preprint arXiv:2401.04003, 2023. 
Technical Quality: 3 Clarity: 3 Questions for Authors: - Line 198: Why were the guessed answers scored as incorrect? This seems to ignore the potential role of feedback in increasing the chances of making a correct guess. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: [As I mentioned in the Weaknesses section] The main limitation of the paper is that the experiments define system success as "whether an agent can reach a goal in 30 steps", which is a specific and objective task that can be verified through formal methods like model checking. In contrast, the introduction suggests an interest in higher-level, subjective human-centered goals, which are not directly addressed by the experiments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
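The machine-checkable success criterion discussed in this review ("the agent can safely reach a goal in 30 steps") amounts to an "eventually reach AND always avoid" pattern over a finite trajectory, which is straightforward to check in code. The sketch below is a hedged illustration of that pattern; the function name, grid coordinates, and opponent representation are all hypothetical and not the study's implementation.

```python
def reaches_goal_safely(traj, goal, opponent_traj, horizon=30):
    """Check one discrete trajectory against the task pattern:
    the agent must reach `goal` within `horizon` steps while never
    occupying the same cell as the opponent up to that point."""
    for t, pos in enumerate(traj[:horizon + 1]):
        if pos == opponent_traj[t]:
            return False      # safety ('always avoid') violated
        if pos == goal:
            return True       # liveness ('eventually reach') satisfied
    return False              # goal not reached within the horizon

# Hypothetical 3-step trajectory on a grid that reaches (0, 2) safely.
ok = reaches_goal_safely([(0, 0), (0, 1), (0, 2)], (0, 2),
                         [(1, 1), (1, 1), (1, 1)])
```

This is exactly the kind of check a model checker automates over *all* trajectories of a specification, which is the reviewer's point about the task being a verification problem in disguise.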
Rebuttal 1: Rebuttal: High Level Objectives and Experiment Setup (Weakness 1 and Limitation) We appreciate the reviewer's observations regarding the perceived incongruity between the high-level objectives delineated in our introduction and the specific experimental task employed in our study. Upon reflection, we recognize that there is an opportunity to further elucidate the distinction between verification and validation processes, and better explicate how our experimental paradigm aligns with real-world workflows and the broader implications of validation procedures. The experimental task, which involves determining whether a given specification will result exclusively in trajectories allowing the agent to safely reach a goal within 30 steps, can be conceptualized and solved as a verification problem. However, we deliberately framed it as a validation problem, wherein the "game rules" of reaching the goal within 30 steps and avoiding the opponent serve as proxies for higher-level, human objectives. This methodological decision was made to facilitate an objective assessment of our hypotheses. We acknowledge in the present paper that the utilization of these rules as surrogates for more complex, human-centered goals renders this experimental task equivalent to a verification task which could be solved with verification techniques such as model checking (lines 151-158). Nevertheless, our framing as a validation problem within the experimental setup allows us to explore the nuanced differences between verification and validation processes, particularly in contexts where system goals may be less explicitly defined or more subject to interpretation. This reframing enables us to investigate the cognitive processes and decision-making strategies employed by participants when confronted with a task that, while structurally similar to a verification problem, requires a more holistic evaluation of system behavior against broader objectives. 
As we note in line 158, the "validation" task we present in this study is simple enough that failure to perform well in this context bodes poorly for when it is to be replaced with a less explicitly-defined validation. This approach aligns with the validation challenges often encountered in real-world scenarios, where the assessment of system performance extends beyond mere compliance with specifications to encompass the fulfillment of overarching goals and stakeholder expectations. This work lays important groundwork for future research in bridging formal methods and human-centered validation processes. Theoretical Interpretability Improvements vs. Empirical Studies (Weakness 2) We appreciate the reviewer providing further background works. Our main focus in this work is to fill in the gap in interpretability studies for STL validation that use real human validators. While Cherukuri et al. provides a mechanism for translation into natural language and evaluation of the translation quality with BLEU, we do not find such techniques to be within our scope on account of the fact that their claims for interpretability are not being tested with human subjects thinking through logical implications, but rather with natural language translations, which are not equivalent. Moreover, language translations of formal logic have not been shown to be an effective mechanism for improving human interpretability. Vinter et al. (1996) and Vinter (1998) showed that specifications rendered in natural language (even when containing logical operators) evoked inappropriate systemic biases in which readers substituted heuristic reasoning (commonly used in language) for logical reasoning (necessary for formal methods) during evaluation. This again highlights the difference between simply translating a specification back and forth between natural and logical languages, and correctly understanding the implications of the specifications (in either form). 
Neither rendering of the specification guarantees appropriate understanding. The hierarchical structure for logic specifications presented in Luo and Liu offers an intriguing approach to formal method interpretability; however, the lack of empirical evaluation with human subjects limits our ability to assess its practical efficacy within the context of this work. Moreover, while this work effectively reduces the length of individual formulas, its applicability may be limited in contexts such as ours where formulas are already concise, with total symbol counts under 100 and at most 7 temporal clauses. We appreciate the reviewer pointing us to recent work on improving the interpretability of temporal logic specifications, but must note that unlike the references we present in lines 38-41, alongside the Vinter work, the reviewer's references involve no human evaluators --- a requirement for making strong claims about human interpretability. The Cherukuri and Luo works may very well be helpful in improving interpretability, but human evidence of their efficacy is yet to be seen. Question on scoring guesses (Question 1) The opportunity to guess was only provided to subjects after they chose to "give up" and were told that their response would be recorded as incorrect. This opportunity was provided following multiple failures to provide trajectories that met the specification, as described in lines 196 and 197 of the manuscript. Limitation (See response to High Level Objectives and Experiment Setup.) --- Rebuttal Comment 1.1: Title: Thanks for your response. Comment: Thank you for your response. All of my questions have been answered, and I don't have any further inquiries. I am maintaining a score of 5. My main concerns remain the limited scope of contributions and the paper's suitability for NeurIPS. --- Reply to Comment 1.1.1: Title: Re: suitability for NeurIPS Comment: Thank you for the comment. 
Since the suitability for NeurIPS is a new concern that was not brought up in the initial review, we would like to address that issue specifically. As we responded to reviewer 14rd when the same suitability concern was stated: "...our work tackles an important topic that is ignored by many studies in logic: validation of behavior via interpretable specifications. The implications of our work relate broadly to logic and XAI, and future work in this area will improve human-AI/human-robot interaction with logic-based systems. NeurIPS specifically has had a variety of work that employs formal logic, such as the following: Differentiable Learning of Logical Rules for Knowledge Base Reasoning by Yang et al. (NeurIPS 2017) Logical Neural Networks by Riegel et al. (NeurIPS 2020) Interpretable and Explainable Logical Policies via Neurally Guided Symbolic Abstraction by Delfosse et al. (NeurIPS 2023) As these works (among others) make interpretability claims about logic-based systems (and were accepted to NeurIPS), our work, which empirically checks these claims, is essential. Quoting Professor Michael Carbin from the Charles Isbell NeurIPS 2020 keynote, "The issue is not just correctness, but understanding the problem across the entire pipeline to understand what correctness is," which strongly mirrors our quotation of Professor Nancy Leveson (lines 39-41)."
Summary: This paper studies the claim of Signal Temporal Logic (STL) specifications being human interpretable and provides results from an experiment with human participants studying a potential active learning technique to improve explainability metrics. Results show that while human engagement is improved, system validation score changes are negligible. Strengths: The paper is written well and tackles an important topic that is ignored by many papers in STL and other temporal logics which claim human interpretability. Weaknesses: - The reviewer feels that this study, while important, may be a better fit for a more specialized venue. - The paper provides a study on the (mostly) unhelpful effects of active learning on the performance of the subjects (Table 1) but does not clearly indicate the usefulness of this knowledge or whether it significantly contributes to the community beyond what is already presented in [15, 33]. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. While the claim that STL is directly human interpretable can be brought into question, does it provide a useful middle ground towards interpretability for task descriptions? Could an STL specification be fed into a Large Language Model (LLM) tool to yield a description more favored by human participants? To this reviewer, intuitively, a temporal logic specification is often more readable than any general neural network policy (viz. model parameters) or say a few “desired” trajectory samples. 2. The example specification in Fig. 1 has several different time intervals considered (4 to 12, 15 to 30, etc). This may hinder “human interpretability” significantly as the user may need to parse the trajectories multiple times. Have different measures of difficulty been considered in the study such as a common set of time periods among the specifications or reduced number of AND clauses? A description of these difficulty classes may yield answers towards what parts of STL are difficult for the users. 
Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 1 Limitations: - Primarily stated as a “negative result” paper, while showing that active learning may not help STL interpretability scores, a solution is not discussed. - STL specification difficulty classes were not defined or considered. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Fit for NeurIPS (Weakness 1) As the reviewer notes, our work tackles an important topic that is ignored by many studies in logic: validation of behavior via interpretable specifications. The implications of our work relate broadly to logic and XAI, and future work in this area will improve human-AI/human-robot interaction with logic-based systems. NeurIPS specifically has had a variety of work that employs formal logic, such as the following: - Differentiable Learning of Logical Rules for Knowledge Base Reasoning by Yang et al. (NeurIPS 2017) - Logical Neural Networks by Riegel et al. (NeurIPS 2020) - Interpretable and Explainable Logical Policies via Neurally Guided Symbolic Abstraction by Delfosse et al. (NeurIPS 2023) As these works (among others) make interpretability claims about logic-based systems (and were accepted to NeurIPS), our work, which empirically checks these claims, is essential. Quoting Professor Michael Carbin from the Charles Isbell NeurIPS 2020 keynote, "The issue is not just correctness, but understanding the problem across the entire pipeline to understand what correctness is," which strongly mirrors our quotation of Professor Nancy Leveson (lines 39-41). Contributions Beyond Greenman et al. and Siu et al. (Weakness 2) Greenman et al. is a study performed in a school setting with people trained in formal methods, seeking to understand misconceptions in LTL. While formal methods experts are important stakeholders in the validation process, they are not the only ones who need to understand the implications of specifications. Further, that work is concerned with general misconceptions, rather than the case of system validation. Siu et al. involved subjects with varying levels of familiarity with STL, but focused on methods that the formal methods and AI community believed were interpretable --- raw formal logic, decision trees, and natural language. 
That contrasts with the present study as we focus on methods demonstrated by the educational community (lines 104-119), heeding Miller's argument that the XAI community ought to draw from the work of experts in human learning in building our methods. Like Siu et al., our negative result calls into question the frequent claims being made about formal methods interpretability without evidence, and highlights the need to provide empirical evidence to support these claims, but we do so from the perspective that Miller takes. Translation into Natural Language (Question 1) The idea that a language translation might improve human interpretability seems intuitive, as the reviewer notes. However, much like the "intuitive" interpretability of other XAI methods that were not tested with users, close examination of the user study literature showed results to the contrary. We did not explore language translation because earlier findings from user studies (Vinter et al., 1996; Vinter, 1998), not simply intuition, showed that specifications rendered in natural language evoked inappropriate systemic biases where readers substituted heuristic reasoning (commonly used in language) for logical reasoning (necessary for formal methods). Complexity of Time Periods (Question 2) We agree that varying time intervals and multiple clauses may impact interpretability. We did not consider this kind of complexity, but note that in practice, creating any kind of autonomous system that only uses a single time interval or a minimal number of AND clauses severely limits the range of potential systems under consideration. Negative Result without a Demonstrated Solution (Limitation 1) Many papers in temporal logic assume human interpretability without empirical human evidence. 
While this work is a negative result, future work includes expanding to a multi-session setup in order to better resemble the structure of active learning pedagogy in classroom environments, and perhaps also a focus on compliance-based professions (lawyers, compliance officers, etc.) who may have better training at identifying edge case failures. Moreover, future work should investigate how other machine-based solutions, such as automatically highlighting edge cases, could aid users in developing an accurate mental model of the limits of a specification. We refrain from endorsing a specific solution at this stage, emphasizing that thorough human subject studies are crucial to substantiate any claims about improving interpretability. Pushback against overreach of claims is a natural and appropriate part of the scientific discourse, as argued in the XAI case by Miller et al. (2017) [28], particularly when others are not checking the claims as we do. STL Difficulty Classes (Limitation 2) We defined and considered specification complexity by the number of symbols (43 to 97) and the abstract syntax tree depth (3 to 5) (lines 182-184). The data did not support that these factors affected performance (line 242). We could have codified difficulty into classes, though how to classify these is unclear, and we are already using the field's standard measures of complexity. For context, the example in Figure 1 has 46 symbols and an AST depth of 3, an example that is on the simple end of our complexity spectrum. A full set of our specifications and maps is in the supplementary PDF. As noted by the Vinter studies, individuals have difficulty working with formal specifications in different ways and have a variety of preferences for specifications' verbosity. In our post-experiment commentary, 4 subjects noted trouble with operator nesting. The question with the lowest performance (32% correct) had a negation operator. 
Yet, other questions that included negation operators --- with equivalent or lower length and AST depth --- had typical correctness rates. This underscores the nuanced differences in how participants engage with formal specifications and the complexities associated with various specification constructions. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response and their rebuttal PDF explaining the specifications considered. Based on reading the other reviews as well, I am optimistic that the community will be interested in the results presented. However, I am still uncertain whether the presented study, in its current form, will be useful to XAI researchers without further in-depth exploration of what makes certain parts of STL challenging for human evaluators. Since the presented work is a step in the right direction, I will raise my score accordingly. Nevertheless, I remain skeptical about how useful the work is compared to Siu et al. [33], given that STL was a subset of the logic descriptions they considered.
Summary: The paper presents the results of a human study exploring the intuitiveness and interpretability of formal logic—in this case, signal temporal logic (STL)—in expressing policies for autonomous systems. Specifically, the authors study the effect of active learning, a pedagogical approach for human learning, on understanding the semantics of STL. The experimental results show that active learning, with or without feedback, provides no significant improvement, and STL remains a challenging logic for humans to understand. Strengths: + The problem of understanding the role of formal logic in interpretable control is well motivated. + The experimental setup, ManeuverGame, and the corresponding logic STL seem appropriate for the study. + The user study appears to be well designed, with appropriate IRB approval. + The results are surprising, especially regarding moderate changes in the results with respect to the formal methods/STEM background of the participants. Weaknesses: - In the absence of a list of exact formal specifications used in the study, it is difficult to understand why participants found it challenging to validate the specifications. - It is unclear whether the exercise tested participants' understanding of the domain or the logic. - The exact challenges in understanding STL are not clearly discussed. Technical Quality: 3 Clarity: 3 Questions for Authors: - Would similar results be observed if natural language were used to express specifications? For instance, the specifications in Figure 1 do not have any STL-specific features that would complicate understanding. - Is it possible that participants' performance resulted from a lack of familiarity with the experimental setup rather than with the logic? - Which aspects of the logic do participants find challenging: modalities, time-tracking, or nesting of the operators? - How did the authors choose the class of specifications? 
Are they known to be particularly challenging, or do they express some practically relevant requirement? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The paper presents a human study to better understand the role of STL in interpreting robot policies. The study involved three groups of participants, differing in the active learning received. The results show no major differences in performance among these groups. While the user study is clearly of interest, it is not clear if natural language would have been a better candidate for the same task. It appears that the key challenge lies in understanding a conjunction of requirements and their effects. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: List of Specifications Used (Weakness 1) We have included a full list of specifications and corresponding maps in the rebuttal's attached PDF. To clarify the nature of these specifications in the context of STL, we offer the following: All specifications were expressed as location constraints. The basic structure of these constraints involved spatial variables X and Y, representing grid squares in a 2D gridworld. The relational operators $<, >, \leq, \geq,$ and $=$ were employed to define spatial relationships. These atomic propositions were then combined using the propositional logic operators AND (conjunction), OR (disjunction), and NOT (negation), and the temporal operators ALWAYS and EVENTUALLY. All specifications were formulated as absolute positions rather than relational ones, which was intended to make the task nontrivial (otherwise, statements such as "Eventually distance from goal = 0 AND Globally distance from hostile > 2" could be used). While these specifications were intended to be relatively simple, the low rates of validation observed suggest that more intricate real-world systems would likely encounter even more significant challenges. In examining the specific difficulties faced by subjects, a variety of failure causes were identified. Negation was highlighted as a significant challenge during the think-aloud portion of the experiment. Nesting was highlighted by 4 users in their post-experiment commentary as a challenge. Among the four specifications with the lowest performance (each with accuracy <50 percent), the main challenges presented by these specifications included time constraint alignment, operator coordination, negation handling, and nesting complexity. Subjects' Domain Knowledge vs STL Knowledge (Weakness 2 and Question 2) We acknowledge the concern about the experimental setup testing participants' understanding of the experiment domain or the STL logic. 
The experiment utilized a gridworld representing a capture-the-flag scenario, chosen for its familiarity to a wide range of users. To address potential gaps in understanding, we included a comprehensive introductory section that thoroughly explained the game domain and its dynamics. This section typically required at least 20 minutes for most users to complete and featured a combination of text and animated explanations, as well as interactive quizzes to ensure users correctly understood the game rules before they began gameplay. These quizzes evaluated users' understanding of movement through the gridworld, all win and loss conditions, the use of STL within the capture-the-flag scenario, as well as the use of the interface. All questions had to be correctly answered before subjects were able to progress through to the experiment task. Static versions of introductory material and quizzes are provided in the original supplementary material. Challenges in Subjects' Understanding (Weakness 3 and Question 3) With regards to exact challenges in subjects' understanding of STL, our experimental format, which evaluated participants on only 10 specifications in the course of a single session, did not allow for in-depth analysis of the specific challenges they encountered in understanding the specifications. The work of Vinter et al. (1996) and Greenman et al. [15], which addresses the cognitive challenges people encounter when working with formal logic, included much longer periods of evaluation, such as over the course of multiple academic years. We appreciate the reviewer's observation that "the key challenge lies in understanding a conjunction of requirements and their effects." It appears that many subjects struggled with evaluating the STL formulas due to misinterpretations of how different clauses, involving time and operator types, interact with one another. 
Future work expanding to a multi-session setup would allow for better understanding of specific challenges. Translation into Natural Language (Question 1) We acknowledge the reviewer's perspective regarding the potential benefits of translating formal methods into natural language for enhanced human interpretability. This viewpoint resonates with the initial intuition shared by the authors and many others in the field. However, we opted not to pursue language translation based on the findings from Vinter et al. (1996) and Vinter (1998). Their research revealed that specifications articulated in natural language—even those incorporating logical operators—can provoke systemic biases. Readers often revert to heuristic reasoning, which is common in everyday language use, rather than engaging in the logical reasoning that formal methods require during evaluations. Consequently, we believe that if natural language were employed to express specifications, subjects would likely make errors with similar or even greater frequency, albeit for slightly different reasons. The reliance on heuristic reasoning could lead to misinterpretations, undermining the very clarity and precision that natural language is intended to provide. Familiarity with Domain (Question 2) (See Subjects' Domain Knowledge vs STL Knowledge, above) Challenging Aspects of Logic (Question 3) (See Challenges in Subjects' Understanding, above) Choice of the Class of Specifications (Question 4) We chose the class of specifications and the scenario because gridworld movement (in our small environment) was something that we felt subjects could easily grasp. This assumption was validated by our pilot studies as well as by subjects' ability to answer questions in the introductory period that checked for their domain understanding. The simplicity of the validation task was a point of comparison against more complex and ambiguous validation tasks, which would be more difficult to perform. 
Poor performance here would likely indicate an inability to validate more real-world tasks (lines 155-158). --- Rebuttal Comment 1.1: Comment: I thank the authors for their responses to my questions. I am satisfied with the answers provided.
Summary: This paper examines the challenges of using Signal Temporal Logic (STL) for validating autonomous systems and finds that human validation accuracy remains low even with active learning techniques. Using the ManeuverGame interface, the study tests three conditions—no active learning, active learning, and active learning with feedback—finding no significant improvement in validation performance. The research highlights the need for better validation techniques and human-computer interaction to enhance human interpretability and validation of STL-specified policies. Strengths: 1. The study employs a well-structured experimental design using the ManeuverGame interface, which effectively simulates real-world validation scenarios. This allows for a thorough assessment of human validation performance across different conditions. 2. The research provides valuable empirical evidence showing that active learning does not significantly enhance human validation accuracy, which remains around 65%. This insight is crucial for understanding the limitations of current formal methods in practical applications. 3. The study highlights the cognitive challenges faced by humans in interpreting and validating STL-specified policies, pointing out areas for improvement in human-computer interaction. Weaknesses: 1. In my understanding of formal methods, if you want to verify a rather high-level specification (which I assume is what the authors mean by 'validate'): you either directly use formal-method-enhanced synthesis for your policy (which is easily doable for their maneuver case study) or use model checking (which depends on what policy we care about), and no human supervision should be necessary. Could the author justify that? 2. 
I think the criticism against interpretability is unfounded: They are not showing how they are interpreting the formulas for the users (which they can certainly do better with existing online monitors) and I think this is unfair for people who actually spend a lot of time developing monitors that are interpretable (such as the robust online monitors). Plus, few papers on monitoring are cited, which makes the paper seem to lack comparison. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weakness part: 1. Justify why we need human supervision. 2. Explain the criticism against interpretability. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This is a user study, which has no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Human Supervision (Weakness 1, Question 1) We appreciate the reviewer's comment regarding the lack of necessity for human supervision in our experiment and acknowledge that model checking could be used given the concreteness of our objectives. As described in lines 155-158 of the manuscript, we wished to simulate real-world scenarios where human judgment is indispensable, particularly with stakeholder intents that are not easily codified into formal specifications. In the text, we explicitly acknowledge the reviewer's concern, but note that the task is a stand-in for the aforementioned human judgment scenarios. Unfortunately, without presenting to the user a task that can be checked automatically, we cannot objectively check the user's ability to evaluate the specification. Participants' failure to correctly understand the specifications in the context of the task is indicative of the difficulty they would encounter in more ambiguous and/or complex cases (line 158). While methods like model checking and synthesis are indeed powerful, they fall short in capturing complex, context-dependent intents for autonomous systems, such as "navigate safely" or "maintain user trust". Where verification asks the question "does this product/behavior match with the set of requirements set out for it"... validation probes "does this product operate in and only in the ways that I want it to?" In our experiment, "winning" serves as an objective stakeholder intent that can be codified in programming. This stands in for the more complex and less easily codified intents that robotics stakeholders often have and allows us to explore the limitations and challenges of human validation in a controlled environment, providing valuable insights that are applicable to more complex, real-world tasks. For example, in an autonomous driving scenario, it can be verified in real time that a car obeys posted speed limits. 
However, validating that a vehicle is operating safely, obeying the flow of traffic, and properly responding to other drivers' intentions to merge are behaviors critical to its safe deployment that cannot so easily or directly be quantitatively checked. Critique of Interpretability (Weakness 2, Question 2) We acknowledge the point regarding our critique of interpretability claims and recognize the important work of researchers developing robust online monitors, such as Deshmukh et al. (2017) and Zhang et al. (2023). However, upon review of such work, we still do not find empirical evidence that users performing interpretation are able to do so well using these systems. Such evidence cannot simply be provided in a software-only context without human studies. If the reviewer can point us to human studies with users interpreting formal specifications that demonstrate system usability, we would be happy to incorporate them into our discussion. Our findings indicate a gap in testing these systems with human participants to support interpretability claims. While many works make these claims (e.g., line 89), few support them with evidence from human studies. This is not to say that existing efforts do not present interpretable techniques, but rather that there is a pressing need to validate these claims of interpretability with human operators, as argued by Miller et al. (2017) [28]. Moreover, our study included a monitor in the third condition (active learning with feedback) where the robustness of users' trajectories was checked when users marked them as complete. Trajectories with negative robustness could not be saved and users were alerted that their submission was invalid and prompted to retry. However, our subject pool did not show significant improvement in validation performance even with this support. 
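To make the robustness check concrete, here is a minimal hypothetical sketch (not the monitor actually used in the study): for an "ALWAYS (value > threshold)" clause, quantitative robustness is the minimum over time of (value - threshold), so a trajectory that ever dips to the threshold or below gets negative robustness and would be rejected, as described above. The function names and example traces are illustrative assumptions.

```python
# Hypothetical sketch of STL quantitative robustness for two simple clauses.
# Positive robustness means the trace satisfies the clause; negative
# robustness signals a violation (the case the feedback condition flags).

def always_robustness(trace, threshold):
    """Robustness of 'ALWAYS (value > threshold)': min over time of value - threshold."""
    return min(v - threshold for v in trace)

def eventually_robustness(trace, threshold):
    """Robustness of 'EVENTUALLY (value > threshold)': max over time of value - threshold."""
    return max(v - threshold for v in trace)

# Illustrative distances from a hostile at each step of a candidate trajectory.
safe_trace = [5, 4, 3, 4, 6]
unsafe_trace = [5, 3, 1, 4, 6]  # dips to distance 1 at step 2

print(always_robustness(safe_trace, 2))    # 1: always stayed > 2
print(always_robustness(unsafe_trace, 2))  # -1: violated, would be rejected
```

This min/max semantics is the standard quantitative interpretation of the ALWAYS and EVENTUALLY operators over a finite trace; a full monitor would additionally handle time intervals, nesting, and the propositional operators.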
We wish to emphasize the distinction between asserting a system's interpretability based on theoretical design principles and demonstrating it through empirical testing with human users. While theoretical claims rely on design principles to suggest interpretability, it is the empirical evaluation that provides concrete evidence of whether users can effectively understand and use the system. Even systems that appear to be interpretable at first glance, such as decision trees or translation into natural language, may not be actually interpretable, as shown in [33] as well as in Vinter et al. (1996) and Loomes and Vinter (1997), particularly for the difficult task of system validation, which requires understanding all potential edge cases. The evidence we found and cited (lines 38-41, 92-98), along with Vinter et al. (1996) and Loomes and Vinter (1997), which actually involved users, points to a preponderance of evidence that methods claimed to be interpretable in the academic literature are often not so when users interact with them. If the reviewer is able to point us to user studies that demonstrate the interpretability of monitors (or other formal methods tools) that are designed to be human-interpretable, we would happily consider them in the context of this experiment. --- Rebuttal Comment 1.1: Comment: I thank the authors for answering my questions; my concerns have been addressed, and I have updated my score accordingly.
Rebuttal 1: Rebuttal: We thank the reviewers for their critical assessment of our work. As a note, we appreciate one of the reviewers pointing out that we did not include the example formulas and maps shown to subjects. We have included this content in the rebuttal's attached PDF and will also include it in the final manuscript. Contribution to XAI This work builds upon Miller et al.'s (2017) work, which advocates for the critical need to ground explainability work in cognitive science and human information processing [28]. Our study distinguishes itself not only within the formal methods community but also in the broader XAI field by leveraging well-established educational practices and theories. This approach offers a novel perspective on explainability, bridging the gap between AI systems and human comprehension. Many interpretability studies claim human interpretability without providing empirical evidence; our work evaluates claims of human understanding with real humans. Despite producing a negative result, our study underscores the importance of empirical research involving human subjects. This outcome reinforces the understanding that intuitive solutions in XAI may not always align with actual human comprehension. Why not natural language? Although natural language is often perceived as more intuitive for human understanding, it also presents its own set of challenges and biases. For this, we point to the work of Vinter et al. (1996), Loomes and Vinter (1997), and Vinter (1998), which evaluated the use of natural language translations of formal logic. We will address this oversight and incorporate a discussion of their findings into the final manuscript. Ironically, the time in which Vinter performed his experiments was one in which natural language was the standard approach to specifications, and the general push in computer science was to move towards the use of formal specifications for greater human understanding. 
We refer to the following works in our responses: Loomes, M. and Vinter, R. (1997). "Formal methods: No cure for faulty reasoning." In Safer Systems: Proceedings of the Fifth Safety-critical Systems Symposium, Brighton. London: Springer. Vinter, R. J., Loomes, M. J., and Kornbrot, D. (1996). "Seven lesser known myths of formal methods: Uncovering the psychology of formal specification." Vinter, R. J. (1998). "Evaluating formal specifications: A cognitive approach." Deshmukh, J. V., Donzé, A., Ghosh, S., Jin, X., Juniwal, G., and Seshia, S. A. (2017). "Robust online monitoring of signal temporal logic." Formal Methods in System Design, 51(1):5-30. Zhang, Z., An, J., Arcaini, P., and Hasuo, I. (2023). "Online causation monitoring of signal temporal logic." In Computer Aided Verification, pages 62-84. Cham: Springer Nature Switzerland. Pdf: /pdf/617119433fcf471af319ad0240cfc978d199015a.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Modeling Latent Neural Dynamics with Gaussian Process Switching Linear Dynamical Systems
Accept (poster)
Summary: This paper introduces a new approach to model the low-dimensional latent dynamics of a collection of neurons over time. Their approach balances the two desiderata of capturing complex nonlinear dynamics and remaining interpretable. Specifically, they introduce the Gaussian Process Switching Linear Dynamical System, which models the latent state as a Gaussian process with a specific novel kernel function that interpolates between different linear dynamics, akin to a switching linear dynamical system. This mitigates some of the pathologies of rSLDS and also provides uncertainty estimates for the dynamics. The authors validate their method both on synthetic and experimental data from neural recordings. Strengths: This paper tackles an important topic in neuroscience and provides interesting methodological advances continuing a line of research on inferring the latent dynamics of neuronal populations. The paper is overall well-written and provides a concise and clear exposition. For example, Figure 2G very clearly illustrates the problems that recurrent switching linear dynamical systems have. The methodology seems sound to me and is motivated clearly in terms of the issues with rSLDS. The experiments are well-executed and illustrative and importantly also use real data from two animal experiments tackling questions about behavior and decision-making. In addition, the authors provide code for their method and for two of the experiments. Weaknesses: The biggest weakness the paper currently has in my opinion is that it is hard to judge what the computational trade-offs are between the different methods. While the authors provide estimates for the overall compute they use in all experiments (see Appendix E), there is no comparison between different methods.
I would have liked to see an analysis that compares GP-RBF, rSLDS and gpSLDS not only as a function of the number of trials or number of steps simulated forward, but also in terms of their predictive performance as a function of training time / budget. Otherwise, it is hard to judge whether these methods were compared fairly. One aspect that would make the paper stronger, but is not strictly necessary in my opinion, is to add another baseline, which is different from the two approaches that were combined in gpSLDS, e.g. something like LFADS (Sussillo et al., 2016). - Sussillo, D., Jozefowicz, R., Abbott, L., & Pandarinath, C. (2016). LFADS - Latent Factor Analysis via Dynamical Systems. In Advances in Neural Information Processing Systems (NeurIPS) Technical Quality: 4 Clarity: 4 Questions for Authors: - Figure 3: GP with RBF kernel seems to perform better with respect to forward simulation; why do you think that is? - Could you comment on how expensive the different approaches are relative to each other in terms of training and prediction? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: While the paper discusses limitations of their method throughout the paper, this is a bit sparse. I think the paper could be improved by a dedicated limitations paragraph in the discussion in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to read our submission and for noting the strengths of our work! We are especially pleased that the reviewer praised our work's clear motivation, solid experimental results, and relevance to the neuroscience community. ## Weaknesses **Re. Runtime comparison** Thank you for this suggestion – we refer the reviewer to General Response A for new experimental results and a discussion about comparing runtimes and accuracy between the three methods. To further address your specific concerns: > predictive performance as a function of training time / budget Instead of comparing the predictive performance as a function of training time, we decided to compare the accuracy of the recovered latent states and dynamics and corresponding runtimes for a fixed number of vEM iterations sufficient for convergence. We did this for the following reasons: 1. Measuring how well the model recovers ground truth latent states and dynamics is the most direct way of evaluating the model’s performance on synthetic data, as we are primarily interested in how well the model captures the underlying latent variables. 2. For each model class, our aim is to fit the best possible model (in terms of ground truth recovery) using our available computing resources. Therefore, we first fit all models to convergence and then compared their runtimes and performance. We believe that this approach best reflects how these models are applied in practice. While investigating the learning dynamics as a function of fitting time is certainly an interesting question, it is somewhat orthogonal to the primary scientific goal of understanding latent neural dynamics. > hard to judge whether these methods were compared fairly Since we fit methods using different discretization steps to ensure the numerical stability in each model class, we report total runtime as well as runtime normalized by number of time bins. 
This allows us to fairly compare runtimes, which we expect to scale linearly with time bins in all 3 models. Finally, we note that training time depends on many factors specific to each individual implementation, and this should be taken into account when comparing runtimes. In General Response A & the attached PDF, we show that our implementation of the gpSLDS and other GP-SDE models – which leverages fast and efficient parallelization and auto differentiation in JAX – is more efficient per time bin than the most widely-used rSLDS implementation. We believe that this is a valuable contribution to the machine learning and neuroscience communities in itself, and we plan to release a codebase to the public along with the paper. **Re. LFADS** Thank you for the suggestion to compare our method with LFADS. We agree it would be interesting to explore this comparison in future work. On a high level, we expect LFADS to reconstruct neural data well, especially given sufficient data. However, the RNN dynamics of LFADS can be difficult to interpret after fitting and do not come with uncertainty estimates. Moreover, because LFADS is a deep learning model with many more parameters than the gpSLDS, we anticipate that it would struggle more to learn accurate dynamics in data-scarce settings, such as the hypothalamic dataset we analyze in our paper. In contrast, models with structured probabilistic priors, such as the gpSLDS, are better suited to correctly infer key dynamical motifs in these settings. ## Questions **Re. Comparison to GP-SDE w/ RBF kernel** > Figure 3: GP with RBF kernel seems to perform better with respect to forward simulation, why do you think that is? This is likely because the GP-SDE with RBF kernel is more expressive than the gpSLDS kernel, but it comes at the cost of being less interpretable. The RBF kernel is universal in the sense that it can approximate any smooth function with arbitrarily small error [Micchelli and Xu 2006]. 
However, because of its flexibility, key dynamical features such as fixed points are not readily available for downstream analysis. On the other hand, the gpSLDS finds the best (smoothly-interpolating) piecewise-linear approximation to a nonlinear system. This aids in interpretability by showing how nonlinear dynamics can be partitioned into linear regimes, each of which can be individually analyzed. Therefore, we expect that the gpSLDS finds dynamics which are less accurate but more representative of dynamical motifs of interest. **Re. Computational complexity and tradeoffs** > Could you comment on how expensive the different approaches are relative to each other in terms of training and prediction? We provide a discussion on the computational complexity of the gpSLDS in General Response A. All 3 methods use inference algorithms that scale linearly in sequence length. With respect to latent dimension, the gpSLDS scales exponentially (due to using quadrature) while the rSLDS scales cubically. However, the exponential scaling in the gpSLDS could be overcome by using Monte Carlo approximations instead of quadrature for large latent dimensionalities. In addition, all methods can predict dynamics at new locations efficiently once the models are fit. For the GP-SDE based models, predicting dynamics at a batch of $B$ new locations costs $O(KBM^2)$, where $M$ is the number of sparse inducing points. For the rSLDS, this operation costs $O(KB)$ by reading off the dynamics of the most likely discrete state. In practice, the main cost comes from fitting the model rather than from predicting dynamics. We will include a discussion about relative computational tradeoffs to our paper. **Re. Addressing limitations** Thanks for this suggestion – we have written a discussion on limitations and possible model extensions in General Response B, which we plan to add as its own paragraph in the paper. 
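To make the smoothly-interpolating piecewise-linear idea discussed above concrete, here is a toy sketch of a kernel that mixes linear regimes through smooth, state-dependent weights. This is our own minimal illustration: the softmax partition and the shared affine base kernel below are simplifying assumptions, not the paper's exact SSL kernel.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def smooth_switching_kernel(x, xp, W, b, sigma2=1.0):
    """Toy smoothly-switching kernel: linear regimes mixed by softmax
    weights that vary smoothly with the latent state (an illustrative
    form only, not the paper's exact SSL kernel)."""
    pi_x = softmax(W @ x + b)      # soft regime assignment at x
    pi_xp = softmax(W @ xp + b)    # soft regime assignment at x'
    k_affine = x @ xp + sigma2     # shared affine base kernel
    return float((pi_x @ pi_xp) * k_affine)

# Two regimes in 2D, softly separated by a vertical boundary at x1 = 0.
W = np.array([[5.0, 0.0], [-5.0, 0.0]])
b = np.zeros(2)
x_a = np.array([-1.0, 0.5])   # left regime
x_b = np.array([-0.8, 0.5])   # left regime
x_c = np.array([1.0, 0.5])    # right regime

k_same = smooth_switching_kernel(x_a, x_b, W, b)
k_cross = smooth_switching_kernel(x_a, x_c, W, b)
# Dynamics at points in the same regime covary strongly; the correlation
# decays smoothly, not discontinuously, across the regime boundary.
```

Because the weight overlap term is an inner product of feature maps and the affine base kernel is positive semi-definite, their product is a valid kernel; the soft boundaries are what avoid the discontinuities seen at rSLDS regime switches.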
**Thank you again for your positive comments and insightful response!** If you have any further questions, we would be happy to answer them. – The gpSLDS authors --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions on computational cost and adding an explicit limitations section. I believe the changes in the rebuttal strengthen the paper and have raised my confidence in my earlier assessment that this paper should be accepted. I've increased my confidence score to reflect this.
Summary: This paper proposes a new model called the Gaussian Process Switching Linear Dynamical System (gpSLDS). The model is more interpretable and infers more stable latents compared with the alternative rSLDS. In particular, spurious oscillations of the latents can be avoided due to the newly proposed Smoothly Switching Linear (SSL) kernel. Extensive experimental results on one synthetic and two real-world datasets show the benefit of the gpSLDS compared with the model using the traditional RBF kernel and with the rSLDS. Strengths: * The proposed method is detailed in math and intuitive, with some instructive explanations. * One synthetic and two real-world experiments are done, which is good. Weaknesses: * Some of the presentation can be improved. * Lacks some complexity analysis or results. This might be an important factor to be considered for long sequences. See questions. Technical Quality: 2 Clarity: 3 Questions for Authors: * Typo: Line 129, function's slope of * Figure 1A, if different colors are different samples (this should be clearly stated if so), then it is meaningless to show these samples unless the kernel parameters are given. * Line 177, $\boldsymbol z_m \in \mathbb{R}^D$? And it seems like this $\boldsymbol z$ is different from the $z_j$. * What does the conjugacy of the model mean in line 193? * What is the effect of choosing $K$ and $J$? At least some explanations or guidance and the corresponding effects are necessary for them. * What is the algorithm's complexity? Computing GPs is time-consuming, which might hinder computational efficiency for long sequences. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: / Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to read our submission and for providing helpful feedback! We are especially pleased that the reviewer found our methodology to be intuitive and supported by solid experimental results. ## Weaknesses **Re. Complexity analysis** > Lacks some complexity analysis or results. This might be an important factor to be considered for long sequences. We refer the reviewer to General Response A for an analysis and discussion of the computational complexity of the gpSLDS. We highlight that while typical GP inference methods scale as $O(T^3)$ where $T$ is the number of time steps, we overcome this by employing inducing points as in Titsias (2009) and Duncker et al. (2019) to reduce this complexity to $O(TM^2)$ where $M$ is the number of inducing points. It is true, however, that the gpSLDS incurs larger computational costs in scaling with respect to latent dimension due to using quadrature to approximate kernel expectations. For many real world datasets, such as the ones we explore in our paper, the key dynamical features of interest can be captured in low latent dimensions. That said, there will be some applications which require large latent dimensionalities; for these cases, one possible workaround would be to use Monte Carlo methods instead of quadrature. We also highlight that in practice, our implementation of the gpSLDS leverages fast parallelization and auto-differentiation in JAX, which yields faster runtimes per time bin than the rSLDS (see General Response A & attached PDF). ## Questions **Re. Line 129** Thank you for catching this typo! We will be sure to fix this in our paper. **Re. Figure 1A** In Figure 1A, different colors do indeed represent different samples from the GP. We will make sure to clearly state this in the figure caption. 
We decided to show multiple samples for each GP kernel to provide intuition for what each distribution over functions looks like and how they build up to the SSL kernel. **Re. Line 177 notation** Thanks for catching this typo as well! Line 177 should say $\mathbf{z}_m \in \mathbb{R}^K$. Thank you also for pointing out the overloaded notation for $z$. We will keep the notation for the inducing inputs and will change the notation for the rSLDS discrete states to $s_j$. **Re. Model conjugacy** Here, model conjugacy refers to the fact that we can compute the distributions $q(\mathbf{u}_k)$ which maximizes the ELBO in closed-form. This is due to the conjugacy between the Gaussian prior on inducing points $p(\mathbf{u}_k \mid \Theta) = \mathcal{N}(\mathbf{u}\_k \mid 0, \mathbf{K}\_{zz})$ and its corresponding Gaussian variational posterior $q(\mathbf{u}\_k) = \mathcal{N}(\mathbf{u}\_k \mid \mathbf{m}\_u^{k}, \mathbf{S}_u^{k})$. We provide more detail about this update step in Appendix B.4, and present the closed-form updates for $\mathbf{m}\_u^{k}$ and $\mathbf{S}_u^{k}$ in Equations (40)-(41) of that section. **Re. Choosing $K$ and $J$** Thanks for this question. In cases where we do not know the true latent dimensionality or number of linear regimes, we can use standard model comparison metrics in the neural latent variable modeling literature to choose these hyperparameters. For example, we could compare models with different $K$ and $J$ based on forward simulation accuracy, which measures how well we can predict neural activity if we sample future latent states from the fitted model [Nassar et al. 2019; Nair et al. 2023]. Another option would be to use a validation technique called co-smoothing, which involves re-running the inference step of a fitted model on held-out trials after withholding some neurons, and then evaluating the expected log-likelihood on those withheld neurons [Macke et al. 2011; Wu et al. 2018; Keeley et al. 2020]. 
We will add a discussion on choosing $K$ and $J$ to our paper. **Thank you again for your positive comments and insightful response!** If you have any further questions, we would be happy to answer them. – The gpSLDS authors --- Rebuttal Comment 1.1: Comment: Thanks for the response. I don't have further questions and I would like to keep my score.
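The closed-form conjugate update described in this exchange has, schematically, the generic Gaussian form below. This is a sketch of standard conjugate-Gaussian algebra only: the terms `A` and `b` are placeholders standing in for the expected-log-likelihood statistics that appear in the paper's Equations (40)-(41), not the actual quantities.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 16
# Prior over the inducing values of one output dimension: u_k ~ N(0, Kzz).
L = rng.normal(size=(M, M))
Kzz = L @ L.T + 0.01 * np.eye(M)

# Natural-parameter contributions accumulated over the data (placeholders
# here; in the paper these come from expected-log-likelihood statistics).
A = 2.0 * np.eye(M)
b = rng.normal(size=M)

# Because a Gaussian prior is conjugate to a Gaussian (pseudo-)likelihood,
# the ELBO-maximizing q(u_k) = N(m, S) is available in closed form:
S = np.linalg.inv(np.linalg.inv(Kzz) + A)
m = S @ b
```

The point of conjugacy is exactly that `m` and `S` require only this linear algebra, with no inner optimization loop over the variational parameters of `q(u_k)`.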
Summary: The paper explores latent state inference and parameter learning within a switching stochastic dynamical system. In this context, the dynamics are represented by a stochastic differential equation, with the drift function modeled as a Gaussian process. Notably, the paper introduces a novel kernel for this Gaussian process—a mixture of linear functions that captures the system’s switching behavior. The inference process employs a variational inference framework. To validate its effectiveness, the paper conducts evaluations using both synthetic and real neuroscience data. Strengths: The proposed method offers a fresh perspective on inference for switching dynamical systems. It introduces a novel Gaussian process (GP) kernel that captures the switching behavior between linear functions. The experimental results are convincing and fair, and the paper is well-written and easy to follow. Additionally, the authors suggest a minor modification to the variational inference (VI) algorithm to achieve faster updates. Weaknesses: My primary concern with this work lies in how the latent dynamics, denoted as $f$, are modeled as independent draws from the same Gaussian process (GP) prior. In my view, this implies that the posterior over latent components should be independent in each component. Consequently, learning becomes challenging when only single components of the system are observed. This limitation could pose a substantial problem, especially in classical tracking scenarios where only specific state components are observable. Additionally, I perceive the contribution as somewhat marginal. Essentially, the paper introduces a new GP kernel, but beyond that, the impact seems limited. Choice of Linear Kernel: I wonder about the rationale behind using a linear kernel. Why not explore switching between nonlinear kernels? Inference Algorithm Analysis: Lastly, a detailed examination of the variational inference algorithm would provide valuable insights. 
Additionally, a synthetic example for comparison could help demonstrate the algorithm’s performance. For instance, the authors could select a linear stochastic differential equation (SDE) and perform closed-form parameter estimation using methods like the Expectation Maximization (EM) algorithm. In this scenario, latent state inference could be achieved through RTS smoothing. Technical Quality: 4 Clarity: 3 Questions for Authors: - Could the authors provide further details on how the presented approach could incorporate a multi-dimensional correlated Gaussian process prior? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
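The closed-form baseline suggested in this review could be sketched roughly as follows: a discretized linear-Gaussian state-space model with Kalman filtering for latent state inference and RTS smoothing for conditioning on the full sequence. All parameter values below are placeholders chosen for illustration, not settings from the paper.

```python
import numpy as np

def kalman_filter(ys, A, Q, C, R, m0, P0):
    """Forward pass: filtered and one-step-predicted moments."""
    ms, Ps, mps, Pps = [], [], [], []
    m, P = m0, P0
    for y in ys:
        mp = A @ m                          # predict
        Pp = A @ P @ A.T + Q
        S = C @ Pp @ C.T + R
        K = Pp @ C.T @ np.linalg.inv(S)     # Kalman gain
        m = mp + K @ (y - C @ mp)           # update
        P = Pp - K @ C @ Pp
        ms.append(m); Ps.append(P); mps.append(mp); Pps.append(Pp)
    return ms, Ps, mps, Pps

def rts_smoother(ms, Ps, mps, Pps, A):
    """Backward pass: condition each state on the full observation sequence."""
    T = len(ms)
    sm, sP = [None] * T, [None] * T
    sm[-1], sP[-1] = ms[-1], Ps[-1]
    for t in range(T - 2, -1, -1):
        G = Ps[t] @ A.T @ np.linalg.inv(Pps[t + 1])
        sm[t] = ms[t] + G @ (sm[t + 1] - mps[t + 1])
        sP[t] = Ps[t] + G @ (sP[t + 1] - Pps[t + 1]) @ G.T
    return sm, sP

# Simulate a 2D linear SDE discretized by Euler-Maruyama (placeholder values).
rng = np.random.default_rng(0)
dt = 0.05
A = np.eye(2) + dt * np.array([[-1.0, 2.0], [-2.0, -1.0]])  # stable rotation
Q = dt * 0.1 * np.eye(2)
C = np.eye(2)
R = 0.5 * np.eye(2)
x, xs, ys = np.zeros(2), [], []
for _ in range(200):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    xs.append(x)
    ys.append(C @ x + rng.multivariate_normal(np.zeros(2), R))

ms, Ps, mps, Pps = kalman_filter(ys, A, Q, C, R, np.zeros(2), np.eye(2))
sm, sP = rts_smoother(ms, Ps, mps, Pps, A)
mse_filt = np.mean([(m - x) @ (m - x) for m, x in zip(ms, xs)])
mse_smooth = np.mean([(m - x) @ (m - x) for m, x in zip(sm, xs)])
```

In this linear-Gaussian regime the smoother uses future observations as well as past ones, so its state estimates are at least as accurate as the filter's; closed-form EM for the parameters would alternate these smoothing recursions with moment-matching updates.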
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to read our submission and for providing insightful feedback. We especially appreciated that the reviewer found our submission to be clearly written and a "fresh perspective" on SLDS models! ## Weaknesses **Re. Prior independence assumption**\ We thank the reviewer for this comment and agree it is important to consider how certain modeling choices might introduce estimation error or limit expressivity. Here, we provide a discussion on this point which we'll also include in the paper. We note that in our setting, we assume high-dimensional observations are driven by mixtures of latent components. Observing a lower-dimensional projection of the SDE where only single components are observed is not a setting we consider here, but would be an interesting extension. We model $f$ using independent GPs, following a body of previous work [Eleftheriadis et al. 2017; Duncker et al. 2019; Fan et al. 2023]. We note that although the prior assumes independence, **this does not imply that the true posterior over $f$ is independent across dimensions**, as the likelihood depends on the latent state $x$ which combines $f$ across dimensions. We approximate the true posterior using a variational approximation that factorizes over latent dimensions. This enables a tractable inference algorithm which propagates posterior uncertainty to the inference and prediction of latent dynamics. While we could add covariance terms to the variational approximation, this would introduce more parameters which may complicate inference and learning. Nonetheless, we agree it is important to carefully assess biases that the variational approximation may induce in the recovered estimates (e.g. Turner and Sahani 2011) and this will be a topic of future work. **Re. 
Impact of contribution** > the paper introduces a new GP kernel, but beyond that, the impact seems limited While we do introduce a new GP kernel, we believe its broader implications are significant for both the ML and neuroscience communities. We identified key limitations of the rSLDS and drew a nontrivial connection to GP-SDEs via the design of a novel kernel, retaining many of the advantages of the rSLDS while addressing its limitations. The gpSLDS extends a rich line of work on rSLDS and related models which have made a significant impact on interpretable data analysis in neuroscience [Taghia et al. 2018; Costa et al. 2019; Nair et al. 2023; Liu et al. 2023, Vinograd et al. 2024]. We have also developed a fast and efficient JAX codebase implementing the gpSLDS and other GP-SDE models. We will release our codebase to the public upon acceptance, providing practitioners with a valuable tool for modeling neural dynamics. **Re. Why linear kernel?** > I wonder about the rationale behind using a linear kernel. Why not explore switching between nonlinear kernels? We designed the gpSLDS to switch between linear kernels to impose interpretable structure on complex nonlinear dynamics so that they can be easily analyzed downstream. Typical analyses of nonlinear dynamics focus on linearized dynamics around fixed points [Duncker et al. 2019; Sussillo & Barak 2013], and often require second-stage analyses like fixed-point finding [Golub & Sussillo 2018; Smith et al. 2021]. In contrast, interpretable features are readily available in the gpSLDS due to its piecewise-linear structure. In neuroscience, dynamical motifs of linear systems are hypothesized to underlie various kinds of neural computations (e.g. line attractors for evidence integration, rotational dynamics for motor control) [Vyas et al. 2020]. Our goal is to extract these features from neural data in an interpretable way. 
While it is straightforward to extend our GP kernel to switch between nonlinear kernels instead, this added model flexibility would likely make it difficult to correctly learn regime boundaries. In principle, a sufficiently expressive nonlinear kernel may not need to switch at all. In our case, by allowing linear functions to switch as a learnable function of the latent state, we can capture complex nonlinearities in the dynamics while also adding structure to make parameter learning more feasible. **Re. Validating the variational inference alg.** > a synthetic example for comparison could help demonstrate the algorithm’s performance We note that our experiment in Appendix C shows the improved performance of our modified vEM algorithm over standard vEM on synthetic data. For this experiment, we used the same synthetic dataset as in Figure 2 of the main text (two linear systems separated by a vertical boundary). > The authors could select a linear SDE and perform closed-form parameter estimation... Thanks for this suggestion – while this would indeed allow us to compare our vEM algorithm to closed-form EM, our focus is on performing approximate posterior inference for nonlinear SDEs, for which closed-form updates are not available. Therefore, we performed a slightly different experiment in Appendix C: comparing our modified vEM algorithm to standard vEM on a dataset with simple, but nonlinear, dynamics. We chose this setting because the nonlinearity introduces complex dependencies between the dynamics and kernel hyperparameters that would not be present in the linear case. ## Questions > …incorporate a multi-dimensional correlated Gaussian process prior? As we discuss above, the prior independence assumption does not imply that the true posterior is independent across dimensions. In principle it is possible to incorporate correlation structure into the prior, but this would introduce $O(K^2)$ more model parameters that may be harder to learn. 
Since the true posterior is still correlated across dimensions even with an independent GP prior, we decided to stick with the simpler approach. **Thank you again for your positive comments and insightful response!** If you have any further questions, we would be happy to answer them. – The gpSLDS authors --- Rebuttal Comment 1.1: Comment: I have read the rebuttal and thank the authors for their answers. I do not have any further questions and will keep my score.
Summary: This paper introduces the Gaussian Process Switching Linear Dynamical System (gpSLDS), a novel approach for modeling latent neural dynamics. This model extends the Gaussian process stochastic differential equations framework by incorporating a new kernel function that supports smoothly interpolated locally linear dynamics. This innovation allows the gpSLDS to maintain the expressiveness needed for complex systems while enhancing interpretability, addressing the limitations of the rSLDS. The paper's contributions include the development of the gpSLDS model, the introduction of a novel kernel function to balance expressiveness and interpretability, and a new learning algorithm that improves the accuracy of kernel hyperparameter estimation. The model's effectiveness is demonstrated through applications to both synthetic and real neuroscience data, showing superior performance compared to existing methods like the rSLDS. This advancement provides a more robust framework for understanding neural computation and dynamics. Strengths: 1. The gpSLDS introduces a novel kernel function within the Gaussian process framework that uniquely addresses the trade-off between model expressiveness and interpretability in the analysis of neural dynamics. This contribution is original in its use of switching dynamics within a Gaussian process, allowing for smoothly interpolated transitions between local linear regimes. 2. The methodology presented in the paper is well-developed. The paper proposes a GP-SDE model with a well-conceived kernel that facilitates a locally linear interpretation of complex dynamic behaviors. 3. The paper is well-written with a clear structure that guides the reader through the problem motivation, model formulation, and experiments. Weaknesses: 1. The proposed model effectively transforms a linear dynamical system into a Gaussian process with a well-designed kernel.
While this adaptation supports smoothly interpolated locally linear dynamics, it does compromise computational efficiency, shifting from linear time to cubic time cost. Technical Quality: 3 Clarity: 3 Questions for Authors: N/A Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors should explore and discuss the limitations of the gpSLDS, including specific scenarios or conditions under which the model may underperform or fail. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to read our submission and for noting the strengths of our work! We are especially glad to see the reviewer’s comments on the originality and soundness of our method, as well as the clarity of our submission. A large part of the review centers around questions of computational efficiency and limitations of the gpSLDS. We refer the reviewer to the General Response, in which we address both of those themes. In addition, we include more specific responses to your individual review here. ## Weaknesses **Re. Modeling benefits vs. computational tradeoffs** > While this adaptation supports smoothly interpolated locally linear dynamics, it does compromise computational efficiency, shifting from linear time to cubic time cost. We provide a general discussion of the computational scaling of our algorithm in General Response A, showing that our algorithm still scales linearly in sequence length. However, it is true that it incurs larger computational costs in the scaling with respect to the latent state dimensionality. While the gpSLDS does introduce more computational complexity, it brings several key modeling advantages: 1. The gpSLDS can smoothly interpolate between locally linear dynamics. This maintains interpretability by allowing each component to be analyzed downstream using principles of linear systems, while achieving greater expressivity by being able to learn nonlinearities between these linear regimes. Crucially, this resolves problems commonly experienced in the rSLDS, such as artifactual oscillations of dynamics at regime boundaries (Fig. 2G). 2. Our GP-based approach allows us to infer approximate posterior distributions over dynamics at any point in the latent space, whereas the rSLDS often infers uninterpretable dynamics at regime boundaries and does not explicitly treat dynamics parameters as probabilistic quantities. 
We believe that these modeling advantages of the gpSLDS effectively address limitations in the rSLDS that hinder its interpretability in practice. In addition, our implementation of the gpSLDS and other GP-SDE based models leverages fast parallelization and automatic differentiation in JAX with GPU compatibility. This allows the gpSLDS to achieve more efficient runtimes on a per-timestep basis than the most widely-used rSLDS implementation (see General Response A & attached PDF), despite the additional complexity in GP inference. We will release a codebase with the paper if accepted, which we believe will provide a valuable and practical tool for practitioners. ## Limitations > The authors should explore and discuss the limitations of the gpSLDS, including specific scenarios or conditions under which the model may underperform or fail. We refer the reviewer to General Response B for a discussion on the limitations of the gpSLDS, as well as possible model extensions. **Thank you again for your positive comments and thoughtful response!** If you have any further questions, we would be happy to answer them. – The gpSLDS authors --- Rebuttal Comment 1.1: Comment: Thank you for the response. I have no further questions and will keep my score.
Rebuttal 1: Rebuttal: We thank the reviewers for taking the time to read our submission and for providing thoughtful and insightful feedback! We were pleased that the reviewers unanimously supported our submission as a valuable contribution to the NeurIPS community, citing that it was **1) easy to follow, 2) clearly motivated, and 3) backed by convincing experimental results.** Here, we address two main themes that were brought up in several of the reviews. We respond to individual reviewer concerns separately. # (A) Computational complexity of gpSLDS Reviewers rveQ, E7YR, and 3Mtv brought up questions about the computational complexity and efficiency of the gpSLDS. Performing inference and hyperparameter learning in the gpSLDS relies on computing expectations of $f(\cdot)$ with respect to the variational marginals $q(x(t)) = \mathcal{N}(m(t), S(t))$ and the approximate posterior GP $q(f)$. GP-based methods are computationally expensive for two reasons: 1) they typically scale cubically in the number of input points to the GP, and 2) evaluating posterior expectations with respect to distributions over GP inputs involves computing expectations of nonlinear kernel functions, which are typically not available in closed form. To overcome 1), we follow Titsias (2009) and Duncker et al. (2019) in using inducing points to perform inference of $q(f)$. This reduces the computational complexity of evaluating $f(\cdot)$ on a sequence of $T$ latent states from $O(KT^3)$ to $O(KTM^2)$ for $M$ inducing points (in our synthetic experiments we choose $M=16$ ($4 \times 4$ grid) for a latent dimensionality of $K=2$). For 2), we perform quadrature with $N$ nodes per latent dimension, so the total number of nodes needed to accurately approximate kernel expectations scales as $O(N^K)$ (for all experiments we use $N=6$). This represents the main computational bottleneck of the algorithm and can pose challenges for fitting the gpSLDS in settings with large latent dimensionality.
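Schematically, the two approximations combine as in the generic numpy sketch below. This is not our actual implementation: the RBF kernel and plain inducing-point algebra stand in for the model's SSL kernel and vEM machinery, though $N=6$, $K=2$, and $M=16$ match the settings quoted above.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

N, K, M = 6, 2, 16
# 1) Tensor-product Gauss-Hermite grid: N**K quadrature nodes in total,
#    which is the source of the exponential scaling in latent dimension.
nodes_1d, weights_1d = np.polynomial.hermite_e.hermegauss(N)
node_grids = np.meshgrid(*([nodes_1d] * K), indexing="ij")
nodes = np.stack([g.ravel() for g in node_grids], axis=-1)   # (N**K, K)
weights = np.ones(N**K)
for g in np.meshgrid(*([weights_1d] * K), indexing="ij"):
    weights *= g.ravel()

# Sanity check: against N(0, I_K) the rule recovers E[x_1^2] = 1 exactly,
# up to the (2*pi)^(K/2) normalization of the probabilists' weight.
Ex2 = (weights * nodes[:, 0] ** 2).sum() / (2 * np.pi) ** (K / 2)

# 2) Inducing points: evaluating the GP at the whole grid costs
#    O(N**K * M**2) once Kzz is factorized, not the cubic cost of
#    dense GP inference in the number of evaluation points.
rng = np.random.default_rng(0)
Z = rng.uniform(-2, 2, size=(M, K))         # e.g. a 4x4 grid of inducing points
u = rng.normal(size=M)                      # one sample of inducing values
Kzz = rbf(Z, Z) + 1e-6 * np.eye(M)
Kxz = rbf(nodes, Z)
f_at_nodes = Kxz @ np.linalg.solve(Kzz, u)  # GP values at all N**K nodes
```

Swapping the tensor-product grid for Monte Carlo samples, as suggested for large $K$, would replace the `N**K` node construction with a fixed number of draws while leaving the $O(M^2)$ inducing-point algebra unchanged.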
In the real datasets we study in our paper, a small number of latent dimensions could sufficiently capture key dynamical features of interest, and we expect this to be the case in other applications as well. However, in cases that require larger latent states, it is possible to use Monte Carlo methods instead of quadrature to approximate kernel expectations. We will include a thorough discussion of computational complexity and possible model extensions in our paper. Reviewer 3Mtv suggested doing a runtime comparison between gpSLDS, GP-SDE with RBF kernel, and rSLDS. **In the attached PDF**, we include additional experimental results comparing the runtime and accuracy for the three methods on the synthetic dataset from Fig. 2. For the gpSLDS and GP-SDE with RBF kernel, we choose a discretization step of 1ms, yielding $T = 2500$ time bins. For the rSLDS, we choose a discretization step of 20ms, yielding $T = 125$ time bins. We observed that using too large of a discretization step for the GP-SDE based methods and using too small of a discretization step for the rSLDS can in some cases lead to numerical instabilities, due to their respective continuous-time and discrete-time model formulations. Therefore, we chose these settings in order to fairly compare the three models. We ran each method with $K = 2$ for $100$ variational EM iterations, which allowed all three methods’ ELBOs to converge. To deal with the differing sequence lengths, we report both total runtime and runtime normalized by the sequence length. We report standard errors across 5 runs with different random initializations per model.
These results demonstrate a computational tradeoff between the GP-SDE based models and the rSLDS: while the GP-SDE based models require more time bins (via smaller discretization steps) to accurately approximate continuous-time dynamics, they also recover the true latent variables with a much higher degree of accuracy on this continuous-time point process dataset than the rSLDS. In addition, these results show that our own implementations of the gpSLDS and GP-SDE with RBF kernel are more efficient than the most widely-used rSLDS implementation on a per time-bin basis. That being said, we note that differences in runtime are impacted by differences in implementation, such as the discretization step size, GPU compatibility, and choice of optimizer, which vary across models. **We stress that the focus of our contribution is not to optimize for runtimes, but rather to find the best-fitting model possible in order to draw accurate and reliable scientific conclusions.** # (B) Clearly addressing limitations Reviewers rveQ and 3Mtv suggested more clearly presenting the limitations of the gpSLDS. We agree that this would be a valuable addition to the paper, and we will include a paragraph discussing the following points: 1. Quadrature and scalability: As mentioned above, because we use quadrature methods to compute kernel expectations that are not available in closed-form, it could be challenging to scale the gpSLDS to settings with large latent dimensionalities. One possible extension could be to explore using Monte Carlo methods to approximate these expectations in higher latent dimensions. 2. Inference algorithm alternatives: One area of potential improvement for the gpSLDS would be to incorporate more recent methods for latent state inference in GP-SDEs [Verma et al. (2024); Course and Nair (2024)]. In particular, Verma et al. 
(2024) proposed an algorithm for inferring $q(x)$ akin to gradient descent, which they show achieves faster and more stable convergence compared to the fixed-point iteration method we use in our paper [Archambeau et al. (2006)]. While Verma et al. did not originally consider inference over $q(f)$, their method could be directly plugged into the inference algorithm for the gpSLDS. **Again, we thank all of the reviewers for their positive comments and constructive feedback!** – The gpSLDS authors Pdf: /pdf/8b0d2c114e1e8bf263184b52ba4f8d02b0a56edd.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Talking Heads: Understanding Inter-Layer Communication in Transformer Language Models
Accept (poster)
Summary: This paper investigates the interaction between attention heads at different layers in a transformer. They primarily study the “inhibition-mover subcircuit”, a previously identified attention head interaction from circuit analysis work [1,2]. They show the interaction between heads can be characterized by a low-rank subspace, and show it is helpful in reducing prompt sensitivity on a simple item-recall task. [1] Wang, et al. Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small. 2023. (https://openreview.net/forum?id=NpsVSN6o4ul) [2] Merullo, et al. Circuit Component Reuse Across Tasks in Transformer Language Models. 2024 (https://openreview.net/forum?id=fpoAYV6Wsk) Strengths: - The work is well-aware of existing related literature, and the questions posed are interesting. - The work is well-motivated - current LLMs are often not robust to prompt variations, and understanding and mitigating the reasons for this is of value. - The experiments demonstrate that the identified low-rank subspace can be used to effectively improve model performance on the item-recall task they introduce. Weaknesses: - The study is limited to fairly small transformers (GPT2-small, and Pythia-160m in appendix), and it’s hard to know whether the results will generalize to more complicated tasks or methods will scale to larger models. Additionally, it would be helpful to know whether larger models also struggle with the item-recall task (and whether they have similar inhibition-mover subcircuits). - The composition score presented in Equation 1 should be presented in context (i.e. it seems to have come from Elhage, et al. [3]; is there earlier use of this score elsewhere?). Besides substituting individual SVD components in for QK or OV matrices, are there any other changes compared to how [3] measure the composition score?
- It is unclear to me whether the failure mode of GPT2 small on the item-recall task is due to (a) not correctly identifying duplicated objects, or (b) not suppressing correctly identified duplicated objects when going to copy. This is because Figure 4 shows the attention pattern of mover heads can be influenced by either duplicate token head channels or inhibition head channels. Is this because the duplicate token channel influences the inhibition signal? (i.e. its effect on the mover head is mediated by the inhibition head?) - There are some portions of the paper that present results without experimental details. One example is on Lines 228-231: "On OpenWebText ... we find inhibition heads are primarily active in lists and settings where repetition...". Providing details about the experiments run to validate this claim would help strengthen the argument made. In addition, what does it mean for an attention head to be "active" on text or not? Is this measured by its attention pattern or something else? [3] Elhage, et al. A Mathematical Framework for Transformer Circuits. 2021 (https://transformer-circuits.pub/2021/framework/index.html) ___ There are a few places with incomplete sentences or minor typos. I've listed a few I noticed below: - Line 174-175 - “... test how this affects.”, (incomplete sentence) - Line 187 - “communicatoin” channels - Line 214 - “non” - Line 276 - “repersent” - Line 290 - “featuers” - Line 506 - "These results are in", (incomplete sentence) Technical Quality: 3 Clarity: 3 Questions for Authors: - When you say a head is “highly active” on a prompt or task [e.g. Line 229, 255], what is meant by that? Is this measured by its attention pattern or something else? When would an attention head be "non-active"? - When using a subspace to steer the inhibition score, the singular values needed to flip attention from IO to S (or vice versa) seem rather large. 
What are the typical ranges of the singular values you see in your decomposition of the weight matrices you’re investigating? - As one of the other well-studied/well-known subcircuits, have you done any analysis to understand whether the induction subcircuit is also dominated by a low-rank subspace? Or is this specific to the inhibition-mover circuit you study? - In section 5, how does trying to learn optimal singular vector weightings either via gradient descent or regression compare to doing a grid search over 3-d points? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The authors have addressed limitations of their work, which are reasonable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
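As context for the question above about Equation 1, here is a small sketch (ours, not the paper's code) of the composition score as defined by Elhage et al. (2021), together with the SVD-based variant this review describes, in which individual rank-1 terms $\sigma_i u_i v_i^\top$ are substituted for a full weight matrix; which matrix is decomposed (QK or OV) is an assumption here:

```python
import numpy as np

def composition_score(w_down, w_up):
    # Composition score from Elhage et al. (2021):
    # ||W_down @ W_up||_F / (||W_down||_F * ||W_up||_F)
    return np.linalg.norm(w_down @ w_up) / (
        np.linalg.norm(w_down) * np.linalg.norm(w_up))

def svd_composition_scores(w_down, w_up):
    # The modification as described in this review: score each rank-1 SVD
    # term sigma_i * u_i v_i^T of one matrix separately against the other.
    u, s, vt = np.linalg.svd(w_up, full_matrices=False)
    return [composition_score(w_down, s[i] * np.outer(u[:, i], vt[i]))
            for i in range(len(s))]

# Toy example: random matrices standing in for an upstream OV matrix and a
# downstream QK matrix (shapes chosen arbitrarily for illustration).
rng = np.random.default_rng(0)
w_ov = rng.normal(size=(64, 64))
w_qk = rng.normal(size=(64, 64))
scores = svd_composition_scores(w_ov, w_qk)
```

Since the Frobenius norm is submultiplicative, each score lies in [0, 1]; a single rank-1 component scoring far above the rest would indicate the kind of low-rank communication channel the paper reports.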
Rebuttal 1: Rebuttal: > The study is limited to fairly small transformers (GPT2-small, and Pythia-160m in appendix), and it’s hard to know whether the results will generalize to more complicated tasks or methods will scale to larger models. Additionally, it would be helpful to know whether larger models also struggle with the item-recall task (and whether they have similar inhibition-mover subcircuits). Thank you for raising this. Please see our rebuttal, which primarily focuses on this point. In summary, yes: larger models struggle with this task despite its simplicity, and we argue this is an isolated mechanism that is part of a very basic and general language modeling mechanism learned by the model. > The composition score presented in Equation 1 should be presented in context (i.e. seems to have come from Elhage, et al. [3]? Is there earlier use of this score elsewhere?). Besides substituting individual SVD components in for QK or OV matrices, are there any other changes compared to how [3] measure the composition score? As far as we know, the composition score does not appear anywhere before Elhage et al. And that is correct; this is our only modification needed to get it to work. > It is unclear to me whether the failure mode of GPT2 small on the item-recall task is due to (a) not correctly identifying duplicated objects, or (b) not suppressing correctly identified duplicated objects when going to copy. This is because Figure 4 shows the attention pattern of mover heads can be influenced by either duplicate token head channels or inhibition head channels. Is this because the duplicate token channel influences the inhibition signal? (i.e. its effect on the mover head is mediated by the inhibition head?) The reviewer is correct in their last statement: it is because the duplicate token head affects the inhibition head, which affects the mover head. This is established in the circuit identified by [Wang et al., 2023](https://arxiv.org/abs/2211.00593).
If this is still confusing to the reviewer we can go into more detail. Also to be clear, when we do the edit here, we are only editing the duplicate token subspace at the point the inhibition head sees it, so it is following the logic of duplicate token head-->inhibition head-->mover and **not** duplicate token head-->mover head (where --> is read as 'affects'/'changes'). We expect a lot of familiarity with previous work, which is not entirely fair to all readers, so to remedy this we added a section to the appendix outlining the entire IOI circuit in GPT2 from that paper (as well as the circuit we find for IOI on pythia). > When you say a head is “highly active” on a prompt or task [e.g. Line 229, 255], what is meant by that? Is this measured by its attention pattern or something else? When would an attention head be "non-active"? Yes this is measured by the attention pattern. We apologize for the lack of details, please see the rebuttal for an operationalization of these terms and how this analysis was carried out. They will be included in the final draft. > As one of the other well-studied/well-known subcircuits, have you done any analysis to understand whether the induction subcircuit is also dominated by a low-rank subspace? Or is this specific to the inhibition-mover circuit you study? We also looked at the induction head in Figure 6 (left) and Appendix H and found that the one that we looked at was not low rank, so not every head composition is like this. However this is not specific to the inhibition-mover/duplicate-inhibition subcircuits because we found plenty of other compositions between random heads that were similarly sparse. Unfortunately, we didn't have space to also analyze those and assign function to them because they are not part of any known circuits (like the IOI components). This is a very interesting direction for future work. 
We'd be happy to discuss more with the reviewer about what information we could/should include regarding this, though. > In section 5, how does trying to learn optimal singular vector weightings either via gradient descent or regression compare to doing a grid search over 3-d points? This is a very cool idea that we did not consider when writing the paper and did not have time to explore during the rebuttal. We kept the 3-d points analysis extremely basic. Would this involve optimizing directly in that 3d space? Please re-raise this point during the discussion if the reviewer finds this an interesting addition to the paper. We hope we have resolved all of the reviewer's concerns, which we believe we have properly addressed in the rebuttal document or here by clarifying details from the paper. --- Rebuttal Comment 1.1: Title: Thank you for your reply Comment: I have read your response, as well as the other reviews. Thank you for the additional information regarding results on larger models and your viewpoint on how the simple recall task may relate to more general language modeling capabilities. I do think the other reviewers have brought up some valid concerns (e.g. limited related work discussion, generality of the findings), but I think the authors have done an adequate job of trying to address most of them. I think the authors’ proposed changes and clarifications (along with cleaning up of the typos and general presentation) will strengthen the paper. As it stands, I think the paper would be an interesting contribution to the conference and would like to keep my score. --- Rebuttal 2: Title: Thank you for the reply Comment: Thank you for considering the rebuttal and other reviews, and for the suggestion to accept. We agree with the remaining points on related work and will make sure adequate space is provided in the camera ready.
Summary: The paper investigates the communication between attention heads across different layers in Transformer-based language models. First, it establishes that a previous composition metric, which has been shown to be useful in toy settings, is noisy in larger models when tested on the Indirect Object Identification (IOI) task with a known circuit. The authors then propose a modification using Singular Value Decomposition (SVD), rewriting the original matrix as the sum of outer products of the left and right singular vectors, scaled by the corresponding singular values. This new metric reveals inter-layer communication more clearly and provides a less noisy signal. This is causally verified by zeroing out a single singular value on the IOI task, which reduces the inhibition score notably. The authors then study these specific communication channels in detail, showing how interventions in these channels can affect the model's downstream behaviour. Interestingly, they demonstrate that this mechanism is independent of the specific context, functioning as a general pointer over lists. Finally, the authors use these insights to study the seemingly arbitrary sensitivity of language models to the order of items in lists using an artificial laundry list task. They show that as the list length grows, the internal structure in the communication partly collapses and makes it hard for the model to select the correct item. This can in part be addressed using a custom intervention, again demonstrating the causality of this phenomenon on the model behaviour. Strengths: - This paper addresses a fundamental question in interpretability of transformer-based language models, namely how attention heads selectively communicate with each other. Although it was known from prior work that attention heads compose with heads in other layers, to the best of my knowledge, this is the first in-depth study of inter-layer communication in large language models.
- The authors use a variety of techniques to systematically test and causally verify their hypotheses; I also find the use of weight-based decompositions instead of activation-based techniques intriguing, as it enables static analysis of neural networks without the need for input data. - The findings are connected to previous observations on language model robustness, explaining a seemingly arbitrary sensitivity of language models to the order of items in lists. Weaknesses: - The proposed composition score does not work for models using relative positional embeddings like RoPE, which are standard in state-of-the-art language models such as Gemma [1]. This makes the composition score not directly transferable. - The results could be significantly strengthened if the technique was used to conduct an unsupervised search for head compositions within the model, rather than focusing solely on the IOI task with a known circuit. Minor issues: - Typo in line 11: “via analysis *of* their weight matrices” - Typo in line 176: “the OV [or] matrix” - Typo in line 187: “communicatoin” - Overlapping image in Fig. 5, covering part of the “10" [1] G. Team et al., ‘Gemma: Open Models Based on Gemini Research and Technology’, arXiv [cs.CL]. 2024. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Could you elaborate on the intervention used to improve model performance on the laundry list task? Specifically, what operations do you perform to "set the model components in a certain area of the 3D space"? 2. Can the proposed composition score be used to identify compositions of attention heads without prior hypotheses? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I believe the limitations are properly addressed, although the most important limitation - that the composition score is not directly applicable to models with relative positional embeddings - is somewhat hidden in the appendix.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review. We are happy that the reviewer enjoyed the paper, and we appreciate the encouraging comments regarding static weight analysis; we think there is a lot of interesting work to be done in this area. > The proposed composition score does not work for models using relative positional embeddings like RoPE, We'd like to point out that this is only for query/key composition and does not occur for composition between values and outputs. Pythia uses RoPE and we cover positive results on this model using composition in the appendix. We use the composition score to do static circuit analysis on part of the Pythia IOI circuit, which was not previously known (we later validate it with path patching). See the below point. See also the rebuttal for more info. > Can the proposed composition score be used to identify compositions of attention heads without prior hypotheses? Yes, absolutely: the compositions can be taken between pairs of heads, and outliers can be searched for. From there, new connections can be found and later analyzed on data. We had to do IOI circuit analysis on Pythia to find inhibition heads, and the composition score was used for part of this. After starting the circuit analysis and finding the inhibition heads, the value composition score revealed the induction heads that connect the circuit, which path patching later confirmed. > The results could be significantly strengthened if the technique was used to conduct an unsupervised search for head compositions within the model, rather than focusing solely on the IOI task with a known circuit. Thank you for raising this. We definitely hear the reviewer on this, but we're not sure how we could fit this in without it disappearing in the appendix. We hope that the above point about our analysis on Pythia without a known circuit is satisfactory for the scope of this paper.
> Could you elaborate on the intervention used to improve model performance on the laundry list task? Specifically, what operations do you perform to "set the model components in a certain area of the 3D space"? Yes, in plain English: each inhibition head is responsible for one direction (we focus on three of them so that we can visualize the space without dimensionality reduction). We overwrite the output of each head to be some coordinate in its corresponding direction, that is, some scalar times the unit vector corresponding to the inhibition component we previously identified. A very important part of this intervention is that it makes it impossible for the head to attend to any of the previous context (the Q*K operation becomes unused), which is why we are able to say it is content-independent: we control which item in the context is being attended to/ignored without letting those heads attend to them. --- Rebuttal Comment 1.1: Comment: Thanks for clarifying the intervention details and the confusion regarding the applicability to RoPE-based models. I also agree that there is no space in the main paper to perform and discuss unsupervised searches for head compositions and would thus suggest to leave it for future work. I think that other reviewers have raised some valid concerns (e.g. generality of findings, relatively unspecific title), but I still believe it is an interesting and valuable case study with potentially broader applicability, and that most concerns could be addressed in the camera-ready version. Overall, I would like to see the paper being presented at the conference and thus suggest accepting it. --- Reply to Comment 1.1.1: Title: Thank you for the reply Comment: We are glad the clarification was helpful. Thank you for considering our rebuttal and for your suggestion to accept.
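The intervention described in this exchange can be sketched as follows (a toy illustration of our own, not the authors' actual code): the head's output at every sequence position is replaced by a chosen coordinate times the unit vector of its inhibition channel, so the head's own attention no longer influences the residual stream:

```python
import numpy as np

def intervene(head_output, channel_direction, coordinate):
    # Replace the head's output at every sequence position with
    # coordinate * (unit vector of the channel). The head's own Q*K
    # attention pattern no longer matters, which is what makes the
    # edit content-independent.
    u = channel_direction / np.linalg.norm(channel_direction)
    return np.broadcast_to(coordinate * u, head_output.shape).copy()

rng = np.random.default_rng(0)
direction = rng.normal(size=64)               # a hypothetical inhibition component
original = rng.normal(size=(10, 64))          # toy head output: (seq_len, d_model)
edited = intervene(original, direction, 5.0)  # every row becomes 5.0 * unit vector
```

Sweeping the `coordinate` scalar for each of the three inhibition heads is what traces out the 3D space visualized in Figure 5.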
Summary: This paper explores the routes and mechanisms of information transfer between heads of a transformer large language model. They hypothesize the presence of low-rank communication channels between attention heads from different layers within residual connections. The authors propose a method based on Singular Value Decomposition of weight matrices to detect those channels in a pretrained model; this method doesn’t require any additional data. They show that this method can uncover intricate and interpretable intrinsic structures of a transformer, which in turn can be manipulated to selectively adjust the model’s performance. The authors illustrate it by significantly improving the performance of GPT-2 on the list item recall task. Strengths: •) Very interesting and novel idea. •) Good analysis; promising results that can facilitate further progress in interpretability and understanding of the inner workings of large language models. •) The article is well-written, and the text is easy to follow. The figures are nice and illustrative. Weaknesses: 1) The proposed method of Communication Channel Interventions improves model performance on a synthetic task (Laundry List); however, it is not clarified how such modification will affect the model's performance on other tasks. 2) The contextualization in the research field (Section 6, Related Work) is short and covers rather little information on previous works about information passage within transformer models and the attention mechanism. 3) The limitations of the method do not allow for a straightforward implementation on newer models with relative positional embeddings. 4) [Minor] The possible practical applications of the achieved results are outlined vaguely. Technical Quality: 2 Clarity: 3 Questions for Authors: 1) You identify several types of attention heads: mover, duplicator, and inhibitor, but could a single head exhibit traits of multiple types?
Also, is this classification not exhaustive (i.e., there are heads that do not fall into any of the aforementioned categories)? 2) If there are two interventions and each of them is beneficial (i.e., increases the model's performance) on its own, how often is their composition (i.e., applying them simultaneously) beneficial? Is it possible to somehow predict such pairs of interventions without directly computing the quality of each pair? 3) How many heads were affected by the interventions in your experiments in Section 5? Do you have information on how the increase in performance depends on the number of modified heads? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Authors clearly outline the limitations of their work in the dedicated section (in Appendix B). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review. We are very happy to hear that the reviewer found the results interesting, promising for facilitating future work, and the paper easy to follow. The reviewer had some concrete questions and concerns about the generalizability of the method to new models and results to new tasks. We believe we have satisfactory answers to these below: > The limitations of the method do not allow for a straightforward implementation on newer models with relative positional embeddings. We would like to point out that this is only for composition with queries and keys in models with RoPE embeddings, in case it wasn't clear. Value-Output composition is entirely unaffected, and we have results with Pythia (a RoPE model) in Figure 12. This is certainly still a limitation, but we believe in the future it will be surmountable. In addition, composition with mlps (mlp to attn, attn to mlp, and mlp to mlp) is still a valid path for future work that we didn’t explore here, so we don’t think that our method will be limited to only certain kinds of models. Given the complexity of the mechanisms involved already, we figured it would be too much to find a satisfactory workaround to the query/key problem, work with MLPs, and provide our current analysis all in one paper. We'd also like to point the reviewer to the rebuttal which addresses some related points. > The proposed method of Communication Channel Interventions improves model performance on a synthetic task (Laundry List); however, it is not clarified how such modification will affect model's performance on other tasks. We would argue that the mechanism we uncover is an extremely general component in broader language modeling capabilities and possibly affects most tasks that involve some form of recall. 
Our evidence for this is the activity on both the IOI and Laundry List tasks (which would otherwise seem coincidental), the content independence of the mechanism (see Line 281), and the consistent general activity on open domain text (see the rebuttal document, where we outline some evidence for this idea). We will use the rest of the space to answer other questions: > How many heads were affected by the interventions in your experiments in Section 5? Do you have information on how the increase in performance depends on the number of modified heads? We use three of the four inhibition heads (7.9, 8.6, 8.10). We chose to leave out the fourth (7.3) because we wanted to be able to plot the results in 3D without dimensionality reduction (Figure 5), since that often can’t be done. As for why we chose to exclude 7.3 specifically: per Line 182, “changing 7.3 does not have a strong effect on its own” (see Figure 2). We saw some marginal increase by also including 7.3, but it was computationally expensive to run the full sweep. Based on the evidence we do have, the scores would probably be at baseline if we performed an analogous intervention on three random heads, though. > You identify several types of attention heads: mover, duplicator, and inhibitor, but could a single head exhibit traits of multiple types? Also, is this classification not exhaustive (i.e., there are heads that do not fall into any of the aforementioned categories)? We did not find, but do not rule out, that a single head could exhibit traits of multiple types. This classification is not exhaustive, and we do briefly study induction heads in the appendix. Still, we are excited by recent and evolving work in characterizing other types of observed specializations in heads (e.g., [successor heads](https://arxiv.org/abs/2312.09230)). --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions and for the clarifications provided; I have also read other reviews and responses to them.
I would suggest you highlight in the main text the fact that some of the detected phenomena (Value-Output composition) are unaffected by the addition of RoPE (and probably other methods of relative position encoding) to the model, because, in my opinion, this would greatly increase the relevance (and interest) of the presented findings to readers. I still have some concerns regarding the 'side effects' of Communication Channel Interventions. It may exceed the scope of this paper, but I would encourage you to evaluate the performance of the edited models at text generation (although it may be hard to define a formal setting), because it might so happen (this is an intuitive guess) that inhibition channels have an important role during the generation of longer texts. Finally, I think that the method proposed in the paper is quite interesting, and the reported results can be valuable for future research in the field. I think that the changes proposed by the authors will strengthen the paper, and I will update my ratings and raise the overall score. --- Reply to Comment 1.1.1: Comment: We are happy to hear the clarifications improved the perception of the paper. Thank you for the suggestions; we will definitely specify that RoPE does not affect Value composition. > It may exceed the scope of this paper, but I would encourage you to evaluate the performance of the edited models at text generation (although it may be hard to define a formal setting) We can make this part of the appendix that describes the inhibition heads' role on open domain text, and use the examples we find as targets for interventions. Since there was some interest from reviewers in this kind of thing, we think it could be useful, and it doesn't necessarily overextend the paper. Thank you again to the reviewer for their help and effort in improving this paper.
Summary: Building upon prior work in Transformer circuits (Elhage et al., 2021; Wang et al., 2022), the paper identifies novel, low-rank communication channels between attention heads across layers via the Composition Score (a generalized cosine between the read-out weights of a lower-level layer and the read-in weights of an upper-level layer) between _low-rank_ factorizations of the weight matrices (obtained by SVD). The low-rank decomposition is critical, resolving the known issue of the vanilla Composition Score being very noisy. Generally focusing on the role of inhibition heads, which are known to prevent the copying of certain tokens in the prompt, the paper identifies a low-rank communication channel for inhibition and validates its role on a synthetic "Laundry List" task. The task requires recalling an item from a list of items specified in the prompt. It is shown that, when the identified (rank-1) inhibition channel is zeroed out (intervention), the model effectively stops passing the "inhibition signal." The role of the channel is validated for indirect object identification (IOI) and token indexing tasks in the GPT2-small model. Strengths: - Using the SVD as a remedy for fixing the noisy Composition Score is an intuitive and seemingly effective strategy. - The paper’s main contributions, including the method and the new synthetic dataset, are reasonably well-motivated. - The empirical results make sense (for the most part) and support the claim of low-rank communication channels for inhibition heads. It is interesting to see that the inhibition channels are content-independent (just pointing at names in the IOI task) or that it affects the accuracy on the laundry list task. Weaknesses: - Overall, while the paper is well-motivated and includes potentially interesting results, its presentation significantly hinders the reader from understanding the main takeaways of the paper. 
- First, a key piece of related work is the composition analysis of two-layer, attention-only models by Elhage et al. (2021), but the review of the relevant background work is simply too terse and unorganized. Terms like inhibition heads, inhibitor-mover subcircuit, value composition, and the QK/OV circuits (e.g., which weight matrices are we talking about exactly? why are they “low-rank”, especially when it’s not as low-rank as the subspaces we’re discussing here?), are not formally defined. Even things like why the composition score looks the way it is (why is it a matrix multiplication?) or the basic shapes of the weight matrices ($d$ by $d_h$ or $d_h$ by $d$?) involved are not written out in the “background” section. Without these in place, I think the paper is borderline incomprehensible to anyone outside the mech interp community. - Second, all the overclaims in the paper really hurt the overall message. The paper’s main thesis could simply have been something like “using the SVD in the Composition Score allows us to identify low-rank communication channels, as showcased by the identification of inhibition head channels,” which would be clear and interesting. But instead, the paper keeps trying to convey a broader, unsubstantiated message that their methods and experiments somehow are more novel and general than they should be. - “Understanding Inter-Layer Communication”: Exactly what is the scope of this inter-layer communication? Is it primarily/always from layer $l-1$ to $l$, or something more? - ”… using subspaces we call communication channels” (Intro): this terminology already existed in Elhage et al. (2021), but this phrasing makes it sound like the current paper came up with it. - “model components like attention heads” (what other components does this paper cover?) - “Three types of composition”: It appears that most of the focus in the paper, including the dataset itself, is on inhibition heads. 
- “Composition Score”: It should be clear to the reader that the score was first developed by Elhage et al. (2021), especially in Section 3.1. More generally, the intro should do a better job of clearly disambiguating what was introduced already in the literature and what this paper is adding to it. - Lastly, the writing is simply not well polished, and there are many typos and unnatural sentence structures that make it even more difficult to understand the overall paper. Just to name some of them: - Abstract, line 3 & line 11 - Intro, line 34 (\citep) - Page 4, line 114 - Page 4, line 128 (d \times d) - Figure 2 caption, second sentence - Page 5, lines 170—172 (broken parallelism) - Figure 3 caption, second sentence - Page 9, line 276 There’s also a related issue of overusing informal analogies unnecessarily (attention heads “talking” to each other). - Aside from the presentation, the biggest confusion for me is: generally, what implications do we have about inter-layer communication in Transformers from these findings? Besides inhibitions, what compositions do we expect to be represented as low-rank subspaces according to the paper’s approach? How far across layers can the channels be? Do we have any sense of what the rank tells us about the feature? The discussion section should be expanded substantially to at least mention or address some of these points. - In terms of the proposed Laundry List task, a possible caveat is that models more recent (and larger) than GPT2-Small may actually do a good enough job of solving them. I am sympathetic to the fact that it is challenging to redo many of the analyses on larger models, but I think it is worthwhile, at least, to test the accuracy (Fig. 1, left) on more recent, open-weight models, to see how much the motivation in the introduction is still relevant.
- The related work on SVD seems terse, knowing that SVD is arguably one of the most basic linear algebra tools in ML (any method based on PCA would also be relevant, for example). - In the intervention experiments, the choices of the exact subspace (the number of components) and the intervention method (e.g., scaling diagonally for 2D) appear a bit heuristic and need better justification. Technical Quality: 2 Clarity: 1 Questions for Authors: - Are there other baseline methods to be considered beyond the vanilla and SVD-based Composition Scores for identifying communication channels? - Page 6, lines 217--218: How exactly is the 2D duplicate token head found? - Page 7, line 227: In what sense are these results “strong”? Is there a baseline? - Page 7, lines 228--232: any quantitative results for this claim? - Page 8, line 266: why is it top-3 this time? how should someone replicating your experiments choose this number? - I think the talking heads analogy makes sense, but it’s one that was already used before by [Shazeer et al. (2020)](https://arxiv.org/abs/2003.02436) in a not-so-unrelated context (the paper is fully referenced in the circuits paper, for example). I would suggest changing the title or at least adding a footnote of disambiguation from that paper. Confidence: 4 Soundness: 2 Presentation: 1 Contribution: 3 Limitations: The paper states that it does not make claims about the different types of compositions across attention heads in different layers. Aside from that, the limitations are not well addressed; see weaknesses above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
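For readers following the discussion of the vanilla versus SVD-decomposed Composition Scores in this review, a minimal NumPy sketch may help orient. The function names and toy dimensions here are our own, and the score follows the Frobenius-norm form attributed to Elhage et al. (2021); treat this as an illustrative sketch, not the paper's exact implementation:

```python
import numpy as np

def composition_score(w_down, w_up):
    # Frobenius-norm composition score (after Elhage et al., 2021):
    # how strongly the "reading" matrix w_up picks up what the
    # "writing" matrix w_down puts into the residual stream.
    return np.linalg.norm(w_up @ w_down) / (
        np.linalg.norm(w_up) * np.linalg.norm(w_down))

def decomposed_scores(w_down, w_up, k=4):
    # Score every pair among the top-k rank-1 SVD components of the
    # two matrices; for rank-1 pieces the score reduces to a cosine
    # between w_down's write direction and w_up's read direction.
    u1, s1, vt1 = np.linalg.svd(w_down)
    u2, s2, vt2 = np.linalg.svd(w_up)
    scores = np.zeros((k, k))
    for i in range(k):
        comp_down = s1[i] * np.outer(u1[:, i], vt1[i])
        for j in range(k):
            comp_up = s2[j] * np.outer(u2[:, j], vt2[j])
            scores[i, j] = composition_score(comp_down, comp_up)
    return scores
```

On rank-1 components the score is just the absolute cosine between the writer's output direction and the reader's input direction, which is why the SVD decomposition lets sharp outlier channels stand out where the full-matrix score is noisy.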
Rebuttal 1: Rebuttal: Thank you for the review. We are glad the reviewer found the methods interesting and well-motivated but disappointed the paper was difficult to read. This is a complicated topic that needs to fit in 9 pages, so it is difficult to catch everyone up and make a significant contribution in that time. Below, we answer questions, outline some proposed changes, and would also like to use this space to push back on the notion that we are overclaiming. After considering our points, we would like to hear whether the reviewer still feels this way and, if so, what specifically they feel is being overclaimed. >“Understanding Inter-Layer Communication”: Exactly what is the scope of this inter-layer communication? Is it primarily/always from layer l−1 to l, or something more? We do not find a preference for more recent layers. Please see Figure 2: heads in layer 7 communicate with the mover head in layer 9. The duplicate token head (3.0, see Figure 4) communicates with inhibition head 7.9 (a difference of four layers). > “using subspaces we call communication channels” (Intro): this terminology already existed in Elhage et al. (2021) We use this term to ground the paper in previously established literature but also provide a more specific definition rather than a loose notion of communication. Elhage et al. is extensively cited in this work, but this point is well taken because it was not our intention to appear to coin the term. This will be reframed in the camera ready to refer to Elhage et al. (2021) when introducing “communication channels”. > “QK/OV circuits (e.g., which weight matrices are we talking about exactly? why are they “low-rank”” These are low-rank because of their shapes. For example, in one head, the value matrix projects from the embedding dimension (768) to the head dimension (64), i.e., the value matrix is 64x768. This is therefore lower rank than the embedding dimension. This has been consistent in the transformer architecture since Vaswani et al.,
2017, so we do expect some familiarity from a general reader. If the reviewer agrees it would be helpful, we would be more than happy to define these terms in a glossary in the appendix. Relatedly, we’d like to point out this isn’t a typo: >Page 4, line 128 (d \times d) We will add something like “V \in d \times d_h and O \in d_h \times d, so OV \in d \times d” to make this very clear. > There’s also a related issue of overusing informal analogies unnecessarily (attention heads “talking” to each other). We tried to balance formalisms with intuitive analogies due to the complexity of the work. We would be interested in hearing in which specific places more rigorous definitions would be helpful. > “all the overclaims in the paper really hurt the overall message” We are generally a bit confused about what the reviewer finds to be overclaimed. Perhaps the above points help clarify our work. If not, we would appreciate some specifics if the reviewer still feels this point is relevant after the rebuttal. > In terms of the proposed Laundry List task, a possible caveat is that models more recent (and larger) than GPT2-Small may actually do a good enough job of solving them. This is a great suggestion; please see our rebuttal doc. We tested multiple n-billion-parameter models (up to 6.9B) from different model families and found that the larger/more recent models tend to do better but still struggle. We’d also like to point out some relevant related work: [Liu et al., 2023](https://arxiv.org/abs/2307.03172) highlights the broader inability of LMs to use information when it’s in the middle of a long context. This negatively affects retrieval-augmented generation (RAG) systems, which are highly relevant to current SotA systems. We don’t claim this is a failure of the exact same mechanism, but it definitely connects our work to a substantial current problem.
This point may also help answer the following: >”[a] confusion for me is: generally, what implications do we have about inter-layer communication in Transformers from these findings?” Our work points to a concrete capacity limit in inter-layer communication that leads to performance degradation on certain recall tasks. This work provides a promising avenue for understanding and improving “lost in the middle” (Liu et al., 2023) type failures. > “Page 7, line 227: In what sense are these results “strong”? Is there a baseline?” Yes, there is a baseline. In Makelov et al., 2023, they examine the IOI dataset. As a reminder, we report 97.5% and 35% with components taken directly from the weights. Their baseline achieves -8% FLDD and 0.0% Interchange. By taking the gradient to directly optimize a single vector for this task, they achieve 111.5% FLDD and 45.1% Interchange. > “In the intervention experiments, the choices of the exact subspace (the number of components) and the intervention method (e.g., scaling diagonally for 2D) appear a bit heuristic and need better justification.” We do have simple explanations for these: Intervention: We wanted to be able to plot the results in 3D without dimensionality reduction (Figure 5) because we thought it was very interesting to be able to do that. We chose to leave out 7.3 specifically, per Line 182: “changing 7.3 does not have a strong effect on its own” (see Figure 2). We entirely agree that this point is too covert in the current draft, so we will raise this decision when introducing the intervention experiment. Choice of scaling: We use z-score thresholding on the distribution of composition scores (>4) to choose which components to scale. For all components this led to choosing 1D, except for the duplicate token head, which was 2D. This is why we used diagonal scaling.
The graphs in Figures 6 and 7 show these components compared to others, and we believe it’s agreeable that these are also visually very clear outliers and the choice is justifiable. --- Rebuttal Comment 1.1: Comment: I appreciate your efforts in responding to the reviews. The responses were generally helpful and they address some of the key concerns I had with the initial draft. Some additional comments: - I think the additional experiment with larger models is a meaningful result and something that should definitely be included in the revision (either in the main text or in the appendix). One follow-up question is how this result relates to the paper’s claim about a “capacity limit” in Section 5. Am I correct in thinking that a way to address 10+ items is to increase the residual stream dimension to the point where the top-k inhibition components give me enough power to address all objects? Or is there something more fundamental about the task that cannot be addressed simply by increasing the model size? I’m still curious as to exactly what parts (if any) of your analyses and implications would be affected by having larger models. - I think the baselines and the details about intervention experiments in your response should be included in the paper (could be referenced in an appendix). - Regarding overclaims, I think your response covered some of my concerns but not all (I already listed 5 specific examples in my original review). To highlight one: I think the title should specify that the communication channels concern compositional operations (or just mention inhibition directly), or that the paper primarily discusses object identification/recall tasks, or anything that is more specific. This is clearly not the first paper to discuss *any* type of inter-layer communication in Transformer LMs. 
- More generally, I still feel that the gap from “the inhibition-mover subcircuit for object identification/recall tasks” to “how information is passed across layers in Transformers” is quite large, and that the paper should acknowledge this rather than giving the impression that the results generalize more broadly. We don’t even know what other types of channels may exist or whether we can still identify low-rank channels using the composition score for those. Again, this is not at all to say the paper’s study is uninteresting, but I just believe the general tone of the paper needs to be improved to focus more on the actual objects/tasks being studied. That said, enhancing the related work section and properly disambiguating from existing terminologies should alleviate some of the concerns. I have updated my ratings to reflect both the original draft and the authors’ rebuttal. --- Reply to Comment 1.1.1: Title: Thank you for the reply Comment: We appreciate the reviewer's effort in considering our (lengthy) rebuttal and are glad this had an effect on their perception of our work. > Am I correct in thinking that a way to address 10+ items is to increase the residual stream dimension to the point where the top-k inhibition components give me enough power to address all objects Yes, that is our intuition as well, for one, because we see performance degrade more slowly as model size increases. But another reason is that we expect more inhibition-type components/heads to be present in larger models. This intuition comes from the finding of redundant computations in larger models (e.g., see [Lieberum et al. 2023](https://arxiv.org/abs/2307.09458)). With the additional time we will keep trying larger models until we seem to saturate the task. > To highlight one: I think the title should specify that the communication channels concern compositional operations (or just mention inhibition directly) We are understanding this point a lot more clearly now.
We agree this is a fair point. We'll think about specific titles more, but we can change it to something like "...communication between layers in context recall tasks". We want to retain the key point that this is about layerwise movement rather than inter-residual-stream movement through attention (which is more broadly studied). We will spare going into every remaining detail, but the remaining related points are well taken and we will reframe where necessary. We will include an appendix showing that the decomposed composition score finds other very low-rank outliers that exist but that we don't yet understand functionally (unrelated to the inhibition-mover subcircuit).
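On the rank discussion earlier in this thread (the rebuttal's point that the quoted shapes V in d x d_h and O in d_h x d make the combined OV matrix low-rank by construction), the argument can be checked numerically. A minimal sketch assuming GPT-2-small sizes (d=768, d_h=64); the variable names are ours:

```python
import numpy as np

# Shape argument: OV is d x d, but because it factors through the
# head dimension d_h, its rank can never exceed d_h (= 64 here) —
# still far higher than the 1D/2D channels discussed in the paper.
rng = np.random.default_rng(0)
d, d_h = 768, 64
V = rng.standard_normal((d, d_h))   # value projection, d -> d_h
O = rng.standard_normal((d_h, d))   # output projection, d_h -> d
OV = V @ O                          # shape (d, d)
rank = np.linalg.matrix_rank(OV)    # capped at d_h
```

This is what the rebuttal means by heads being "inherently low rank because of their shapes": rank 64 out of 768, as opposed to the rank-1 or rank-2 communication channels found inside that 64-dimensional space.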
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their thorough and thoughtful comments and suggestions. We are glad the reviewers shared our interest in the results on static weight analysis, as we’re hopeful for continued progress in this area, and we’re pleased that there was mostly consensus that our findings supported our hypotheses. After going over the reviews, the main concerns appear related to whether the main mechanism we discuss (related to inhibition) really matters outside of GPT2-small and Pythia. We’d like to dedicate the rebuttal to addressing those concerns. # Other Models First, we’d like to briefly discuss and expand on the implications of our findings beyond the small models tested here. In Figure 1 of the rebuttal, we highlight that larger/more modern models do not solve the object selection problem we use to motivate the paper in Figure 1. This makes the point that the mechanism we picked is not too specific to support broader contributions. We’d also like to draw attention to [Liu et al., 2023](https://arxiv.org/abs/2307.03172), which shows that even very powerful LMs can struggle with recall from context. More generally, we believe our modified composition score can generalize beyond the models tested here. The concept of the composition score (and its decomposed variant introduced here) generalizes to mlp-mlp and attn-mlp interactions as well, which we don’t test here. We also believe that the issue with positional embeddings and query/key composition can be solved in the future. Note that we do have positive results with Pythia (which uses RoPE) and value composition in this paper. This paper is simply too full already to include such extensions to the work, and we are hoping to explore these directions in follow-ups. # Other Tasks There was some concern from a few reviewers that the tasks studied here (and therefore the mechanisms involved) are too narrow.
We argue that the mechanism we identify is part of a more basic language modeling mechanism that underlies most tasks involving recall from context. In Figure 2 of the rebuttal we show examples of the inhibition heads' activity. Some reviewers pointed out that we were vague about the methods that led to the claims in Lines 228-231, so we outline them below. Combined with the observations about larger models above, we believe our results indicate that although we are targeting an extremely specific mechanism, it is actually part of a more general language modeling capability present in LMs. Thus, we see our findings and our methods of analysis as useful and productive progress towards a greater understanding of the inner workings of LMs. ## Details on identifying passages that highly activate inhibition heads We would like to provide our methodology, which was rigorous. First, we split passages from OpenWebText-10K into token sequences of length 256 and run them through the model. We cache the attention patterns of the inhibition heads and manually examine tokens for which the attention score is >=0.3 for any token besides the very first. We use this to define “highly active”. We examined about 200 of these passages manually and found the patterns to be extremely consistent. The heads would almost always attend from some token like “and” to some other token that would most likely be a repetitive continuation; e.g., in “Crime and”, the “and” would attend to “Crime”, presumably to inhibit “Crime and Crime” from being generated. Figure 2 (left) shows a screenshot of the interface we built to examine these examples when preparing this manuscript. Note that these were not cherry-picked; they are the first examples from the dataset that arise from the 0.3 threshold.
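The scanning step just described (cached attention pattern, 0.3 threshold, first token excluded as the usual attention sink) can be sketched in a few lines. This is a minimal sketch: `highly_active_positions` is our own hypothetical helper name, and real usage would operate on attention patterns cached from the model rather than a raw array:

```python
import numpy as np

def highly_active_positions(attn, threshold=0.3):
    """Return (query, key) pairs where an inhibition head's attention
    weight meets the threshold, ignoring attention to the first token
    (the usual "null"/sink position).

    attn: (seq_len, seq_len) lower-triangular attention pattern."""
    hits = []
    seq_len = attn.shape[0]
    for q in range(seq_len):
        for k in range(1, q + 1):  # causal mask; skip key position 0
            if attn[q, k] >= threshold:
                hits.append((q, k))
    return hits
```

Each flagged pair is then a candidate passage for the kind of manual inspection the rebuttal describes (e.g., "and" attending back to "Crime").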
We'd like to add a section in the appendix for further analysis of inhibition heads on open-domain data. # Other edits We’d lastly like to thank the reviewers for their attention and specific feedback on the presentation of the results. This paper has many small details, and we appreciate the patience that was needed to provide such low-level feedback. After implementing reviewer edits, we have some space left on page 9, which we will dedicate to discussion and to expanding the related work, as suggested. Pdf: /pdf/155608197d168ffdf0fff6b7f4151c864d3d2045.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper primarily employs low-rank communication channels to elucidate how internal layers within the Transformer model transmit information. Initially, the study utilizes existing research to identify duplicate heads, inhibition heads, and mover heads. The objective is then to explore the interactions among these heads. The findings indicate that directly incorporating these elements into the computation of the Composition Score does not yield significant results due to the complexity of the signals involved. Consequently, the paper proposes the application of Singular Value Decomposition (SVD) on the weight matrix, with subsequent sorting by variance, to identify the principal read/write subspace. This approach identifies such communication channels solely through the model's weight file. Furthermore, in certain recall tasks, such as the Indirect Object Identification (IOI) and Laundry List Task, this paper leverages the low-rank characteristics of communication channels to enhance task accuracy. Strengths: 1. This article provides a detailed background to facilitate readers' understanding of prior research. 2. The article presents a hypothesis, verifies it through experiments, and conducts further in-depth research, thereby demonstrating a complete scientific research process. 3. To validate the proposed hypothesis, the article also develops a Laundry List Task for specific experimental verification. Weaknesses: 1. As part of its contributions, this article presents a method for identifying nearly complete IOI circuit signals in GPT-2 Small directly through the weight file. However, the instructions provided in lines [148-152] are too brief, making the specific method difficult to understand. 2. It is well known that the matrix of the attention head is sparse, which naturally suggests using Singular Value Decomposition (SVD) to compress the original matrix. 
Although the primary aim of this article is to identify subspaces, there is no fundamental difference between the two approaches. Numerous related works, such as LoRA, have applied SVD to the original model weights. Moreover, the conclusion that inter-layer communication is low-rank seems evident due to the inherent low rank of the attention head weight matrix. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. The primary experiments in this paper were conducted on the IOI and Laundry List Task. Notably, the Laundry List Task has significant limitations in excluding repeated words. Inhibiting repeated words can only be considered a part of the recall task. Is it possible that recall tasks are the ones where repeated words appear most frequently? Some tasks should enhance the inhibitory effect on repeated words, while others should reduce it. Therefore, the experimental tasks in this paper are insufficient. 2. In Figure 2, the Inhibition Score is presented, but it is not introduced until Section 3.3.1, which may confuse readers. The use of \( V \) in Equation 2 may confuse readers with the \( V \) matrix in the Transformer model. There is a spelling error in the caption of Figure 3: "inhiibt" should be "inhibit." 3. The model-editing method proposed in this article involves performing SVD decomposition on a specific head (the inhibition head) and retaining the main subspace components. This approach improves performance on the Laundry List Task, indicating an enhanced inhibitory effect on repeated words. The intervention method in this article primarily demonstrates that other signals unrelated to inhibitory ability exist within the inhibition head, and that dimension reduction can enhance inhibitory capacity. It should be noted that identifying the inhibition head is not a contribution of this article; rather, it relies on the conclusions of previous work. 
This article merely reinforces that the inhibition head functions as an inhibitory mechanism and that its combination with the duplicate head improves the inhibition of repeated words, a conclusion already established in prior research. Additionally, this article does not propose a new mechanism for inter-layer communication in the Transformer model. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The author has not fully addressed the limitations of their research. While the proposed method improves the model's inhibitory effect on repeated words, it primarily relies on existing findings and does not introduce a new mechanism for inter-layer communication in the Transformer model. Additionally, the experimental tasks used in the study, such as the Laundry List Task, have notable limitations: some tasks should enhance the inhibitory effect on repeated words, while others should perhaps reduce it. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > “It is well known that the matrix of the attention head is sparse, which naturally suggests using Singular Value Decomposition (SVD) to compress the original matrix… the conclusion that inter-layer communication is low-rank seems evident due to the inherent low rank of the attention head weight matrix” Attention heads are inherently low-rank because of the size of the matrices (e.g., 64x768, projecting down to 64 dims). This is very different from the low-rank subspaces we identify here, which are 1D or 2D. We disagree with the claim that this is somehow self-evident, because we find that not all compositions between communicating matrices have this property. For example, see Figure 6, left. We know that previous token head 4.11 composes with induction head 5.5 from prior circuit analysis (Wang et al., 2023), but they seem to do so with nearly the full rank of the attention head. > “Numerous related works, such as LoRA, have applied SVD to the original model weights.” Does the above point address the concern about the connection to LoRA? If not, could the reviewer please expand on it? We are familiar with some of the relevant work on LoRA, but we are not aware of any work that implies the results we find in this paper. > “the instructions provided in lines [148-152] are too brief, making the specific method difficult to understand.” Thanks for raising this. We take the distribution of composition scores between component matrices and another matrix (such as mover head 9.9) and take the highest z-score components (the outliers). We do not determine an optimal threshold in this work, but we use >4 (upon visual inspection, the outliers are extremely obvious: see Figure 6, middle and right). > “In Figure 2, the Inhibition Score is presented, but it is not introduced until Section 3.3.1, which may confuse readers. The use of ( V ) in Equation 2 may confuse readers with the ( V ) matrix in the Transformer model.
” Thanks for pointing this out; we will address this in the camera ready by introducing the term earlier. > “This article merely reinforces that the inhibition head functions as an inhibitory mechanism” No other work has been able to find a known circuit within a model solely from the weights, without running the model. We think this is a significant contribution to the field. We focus on a well-known circuit to establish credibility without too much overhead of verifying a new circuit. We localize the inhibition mechanism to a few dimensions in the attention heads. We establish that traversing this space (by intervening on the outputs of these heads) controls the position of the token inhibited, completely independently of the content of the token. That is, the edit we make is ‘ignorant’ of the preceding context, yet the downstream effect is predictable. This has not been established by previous literature. > “this article does not propose a new mechanism for inter-layer communication in the Transformer model.” The new mechanism is the way this is implemented in the model, with extremely low-rank signals. ‘Bandwidth’ in the residual stream has been hypothesized as being in high demand (Elhage et al., 2021), but it has not been established how these signals are passed. We thoroughly justify the claim that this is one such method native to the transformer (see point 1). We believe we have clarified the points raised by the reviewer. This paper makes heavy use of the appendix for supporting figures, which we know reviewers are not necessarily responsible for, so we have provided some pointers here. We hope the reviewer considers these in their determination of the paper.
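The component-selection rule described in this rebuttal (keep components whose composition score is a z-score outlier, with threshold >4) can be sketched in a few lines. This is our own illustrative helper, not the authors' code, and the score vector in the usage example is synthetic:

```python
import numpy as np

def outlier_components(scores, z_threshold=4.0):
    """Indices of composition-score outliers, selected by z-score
    thresholding over the distribution of scores (as in the rebuttal:
    z > 4 typically selects one component, 2D for the duplicate
    token head)."""
    scores = np.asarray(scores, dtype=float)
    z = (scores - scores.mean()) / scores.std()
    return np.flatnonzero(z > z_threshold)
```

On a distribution where one component's score dwarfs a tight cluster of background scores, only that component survives the threshold, matching the "visually very clear outliers" the authors describe.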